Macro_NLU_Intent_Refinement.ipynb
###Markdown Macro NLU Data Refinement It's a bit like the TV show [Severance](https://www.imdb.com/title/tt11280740/) .![Helly R and Mark S](https://media.npr.org/assets/img/2022/02/15/atv_severance_photo_010103-5f8033cc2b219ba64fe265ce893eae4c90e83896-s1100-c50.jpg "Helly R and Mark S")*Helly R*: `My job is to scroll through the spreadsheet and look for the numbers that feel scary?`*Mark S*: `I told you, you’ll understand when you see it, so just be patient.`![MDR](https://www.imore.com/sites/imore.com/files/styles/large/public/field/image/2022/03/refinement-software-severance-apple-tv.jpg "severance macro data refinement")*Helly R*: `That was scary. The numbers were scary.`Hopefully the intents and entities that are wrong aren't scary, just a bit frustrating. Let's see if we can find the right ones.NOTE: We will use Logistic Regression with TFIDF features to train our intent models and CRFs for entity extraction. Why? Well, they are very fast, and neither method is state-of-the-art. This is good, because it is easier to find problems we will need to refine in the dataset than if we were to use a proper NLU engine like Snips or something SOTA like BERT. It is very important to note that some of the problems we pick up on might not be actual issues, but might be due to the limitations of the models. Refining the real problems and ignoring the limitations of the models is a good way to improve the models. Then, when the dataset is ready, we can use a more advanced NLU engine and get the best performance possible.* Macro NLU Data Refinement: Intent* Macro NLU Data Refinement: Entity Load the dataset ###Code try: nlu_data_df = pd.read_csv( 'data/refined/nlu_data_refined_df.csv', sep=',', index_col=0) print('Successfully loaded nlu_data_refined_df.csv') except: data = 'data/NLU-Data-Home-Domain-Annotated-All-Cleaned.csv' nlu_data_df = DataUtils.load_data( data ) # TODO: Remove this when done. It's just for testing! data = 'data/NLU-Data-Home-Domain-Annotated-All-Cleaned.csv' nlu_data_df = DataUtils.load_data( data ) removed_nlu_data_refined_df = nlu_data_df[nlu_data_df['remove'] != True] removed_nlu_data_refined_df ###Output _____no_output_____ ###Markdown Intent Create intent classifier report Let's do a report by domain classification. ###Code domain_labels = 'scenario' domain_report_df = NLUEngine.evaluate_intent_classifier( data_df_path=nlu_data_df, labels_to_predict=domain_labels, classifier=LR ) domain_report_df ###Output _____no_output_____ ###Markdown It might be easier to see this graphed. ###Code Analytics.plot_report(domain_report_df) ###Output _____no_output_____ ###Markdown And now let's do a report by intent classification. ###Code intent_labels= 'intent' intent_report_df = NLUEngine.evaluate_intent_classifier( data_df_path=nlu_data_df, labels_to_predict=intent_labels, classifier=LR ) intent_report_df ###Output _____no_output_____ ###Markdown A graph might also be nice here. ###Code Analytics.plot_report(intent_report_df) ###Output _____no_output_____ ###Markdown Macro Intent Data Refinement For changing to another domain, start here again. Let's train a classifier with the current state of the data and get the predicted intent labels.If you have already performed refinements, this will refresh the predicted labels.(Why not split into a training and test set? 
Because we want to see the results of the intent classifier on the whole data set, I mean if it's still getting it wrong when it has trained on it, then perhaps there is something wrong with the utterance, tagging, overlapping intents, etc.) ###Code LR_intent_classifier_model, tfidf_vectorizer = NLUEngine.train_intent_classifier( data_df_path=nlu_data_df, labels_to_predict='intent', classifier=LR ) nlu_data_df = IntentMatcher.get_predicted_labels( nlu_data_df, LR_intent_classifier_model, tfidf_vectorizer) ###Output _____no_output_____ ###Markdown Now that we know what works and what doesn't, we can start refining the intents. For each domain, this step will be repeated until all intents have been refined. Pick a domain (scenario) to review ###Code domain_selection = MacroDataRefinement.list_and_select_domain(nlu_data_df) ###Output _____no_output_____ ###Markdown We want to get all of the entries for that domain. ###Code domain_df = DataUtils.get_domain_df(nlu_data_df, domain_selection) ###Output _____no_output_____ ###Markdown We will get the intent keyword features and their rankings (coefs) from the intent classifier. ###Code #TODO: This will be removed in a later version (no need to rank in the reports, it isn't so helpful!) intent_feature_rank_df = MacroIntentRefinement.intent_keyword_feature_rankings( LR_intent_classifier_model, tfidf_vectorizer) ###Output _____no_output_____ ###Markdown Having all of the incorrectly predicted intents to review for this domain is a good way to see what is going wrong. Especially when you have used them all as a training set, and yet it still can't predict some entries correctly.The big question is: Is it because of defects in the data or is it because of the intent classifier? We really want to find defects in the data to refine over classifier defects. ###Code incorrect_intent_predictions_df = IntentMatcher.get_incorrect_predicted_labels( domain_df, LR_intent_classifier_model, tfidf_vectorizer) incorrect_intent_predictions_df ###Output _____no_output_____ ###Markdown However, it can be a bit much seeing everything that isn't working right, perhaps we can break it down better in a report! Let's take a look at the report. You can use the circle with the plus to expand the items individually in the report or click on the number of items. Might I recommend looking at one intent at a time and expanding the nested items for each of those. 
There is a lot of information to look at here, but this stuff is super important to understand for the refinement of the data.Each intent has the following items:* **f1 score**: the overall score of the intent(we want to improve this number!)* **total count**: the total number of utterances that have this intent* **total incorrect count**: the total number of utterances that have this intent but are incorrectly predicted(we want to reduce this number!)* **top features**: the top ten features(words) that are associated with this intent(these are just the individual words ranked, not combined together!)* **overlapping features**: the top ten features(words) that are associated with this intent and are also associated with other intents, which may make classification based solely on these features difficult* **correct utterance example**: the intent, the first annotated utterance that is correctly predicted as an example, and a list of the words in the utterance with their coefficient rankings* **incorrect utterance example**: the intent, the first annotated utterance that is incorrectly predicted as an example, and a list of the words in the utterance with their coefficient rankings* **incorrect predicted intents and counts**: for this intent, a list of the incorrectly predicted intents and their counts(we want to reduce this!) ###Code incorrect_predicted_intents_report = MacroIntentRefinement.get_incorrect_predicted_intents_report( domain_df, incorrect_intent_predictions_df, intent_report_df, intent_feature_rank_df) # TODO: Remove unused measures. RenderJSON(incorrect_predicted_intents_report) ###Output _____no_output_____ ###Markdown It's always a good idea to save the report. ###Code DataUtils.save_json(incorrect_predicted_intents_report, 'data/reports/' + domain_selection + '_incorrect_predicted_intents_report.json') #TODO: add in way to show the improvements when refinement is complete (save original json and new json as one file with two main keys), how best to compare them? ###Output _____no_output_____ ###Markdown And finally, we will save a dictionary of all of the incorrectly predicted intents for each intent to refine in this domain. ###Code intent_refinement_dictionary = MacroIntentRefinement.get_intent_dataframes_to_refine( incorrect_intent_predictions_df) ###Output _____no_output_____ ###Markdown Intent refinement: human in the for loop.For changing to another intent to refine within the same domain, start here again.Now it's your turn to shine, human!You will provide a refinement to each incorrectly predicted intent. Some of the incorrectly predicted utterances are actually fine the way they are; you may need to review the intent that is falsely being predicted...Besides correcting the utterances(e.g. spelling or grammar), you can also mark an entry with the following:* **review**: the utterance needs to be reviewed again by a human* **move**: the utterance needs to be moved to another intent(NOTE: if you have a big data set, it might be better to just **remove** the utterance from the data set)* **remove**: the utterance should be removed from the datasetYou can use your human ability to refine the NLU intent data by answering the following questions:0. What should this intent actually do in general? There is usually a specific action the intent should do, and some of the utterances are mislabeled or are not specific enough. If you can't exactly place the utterance without any context (just the untagged utterance by itself), then an AI can't either. 1. 
Does the utterance fit the intent? -> mark as move, remove, and/or review2. Is the utterance's grammar or spelling wrong but (1) is fine? -> correct the utterance3. Is this intent colliding with another intent because the scopes of both intents overlap? -> redefine the scope of the intents (either combine them or separate their functionality better)4. Is the intent colliding with another intent because certain keywords overlap between intents? -> redefine the keywords to split between intents or merge them together if they are similar From the list of intents, pick one to refine. ###Code intent_to_refine = MacroIntentRefinement.list_and_select_intent(incorrect_intent_predictions_df) ###Output _____no_output_____ ###Markdown It's good to take a quick second look at entries that are predicted correctly, to see what this intent is supposed to be doing (but perhaps even some of these are wrong too, LOL) ###Code domain_df[(domain_df['intent'] == intent_to_refine) & ( domain_df['intent'] == domain_df['predicted_label'])].head(10) ###Output _____no_output_____ ###Markdown On rare occasions, you may want to remove the whole intent from the data set. This can be useful when the whole intent doesn't make sense or is out of scope. Run this and the next cell ONLY if you are sure you want to remove the whole intent, not just the incorrect predictions! Then start on the next domain again further above. ###Code updated_df = MacroIntentRefinement.remove_intent(nlu_data_df, intent_to_refine) ###Output _____no_output_____ ###Markdown Save the state of the data set with the removed intent, and then start again on the next intent or domain at the top. ###Code updated_df.to_csv('data/refined/nlu_data_refined_df.csv') nlu_data_df = pd.read_csv( 'data/refined/nlu_data_refined_df.csv', sep=',', index_col=0) ###Output _____no_output_____ ###Markdown Let's refine the incorrect predictions of this intent! ###Code to_review_sheet = MacroDataRefinement.create_sheet( intent_refinement_dictionary[intent_to_refine]) to_review_sheet ###Output _____no_output_____ ###Markdown Let's save the reviewed data set. ###Code reviewed_intent_df = MacroDataRefinement.convert_sheet_to_dataframe( to_review_sheet) reviewed_intent_df.to_csv( 'data/reviewed/reviewed_'+ domain_selection + '_' + intent_to_refine + '_incorrectly_predicted_df.csv') #TODO: Remove this when done. reviewed_intent_df = pd.read_csv( 'data/reviewed/reviewed_' + domain_selection + '_' + intent_to_refine + '_incorrectly_predicted_df.csv', sep=',', index_col=0) ###Output _____no_output_____ ###Markdown For entries that have been marked `move`, we will need to know the intent that they are moving to. ###Code #TODO: add in list of intents above this cell so people can see if it's the right intent to pick #TODO: BUG: change method to look up scenario for changed intent to relabel the scenario refined_intent_df = reviewed_intent_df.apply( MacroDataRefinement.move_entry, axis=1) ###Output _____no_output_____ ###Markdown It's probably a good idea to save this in a csv! ###Code refined_intent_df.to_csv( 'data/refined/refined_' + domain_selection + '_' + intent_to_refine + '_incorrectly_predicted_df.csv', sep=',') #TODO: Remove this when done. refined_intent_df = pd.read_csv( 'data/refined/refined_' + domain_selection + '_' + intent_to_refine + '_incorrectly_predicted_df.csv', sep=',', index_col=0) ###Output _____no_output_____ ###Markdown We will mark all the refined entries and merge these with the original data set. 
###Code refined_intent_df = MacroDataRefinement.mark_entries_as_refined(refined_dataframe=refined_intent_df, refined_type='intent') updated_df = MacroDataRefinement.update_dataframe( nlu_data_df, refined_intent_df) ###Output _____no_output_____ ###Markdown It's always good to double check your data set after you have refined it before saving it. ###Code updated_df.shape ###Output _____no_output_____ ###Markdown Check to make sure the intents you just refined are there. This just checks the first 10, but that is good enough! ###Code updated_df[(updated_df['intent']== intent_to_refine) & (updated_df['intent_refined']== True)].head(10) ###Output _____no_output_____ ###Markdown Total number of entries refined ###Code updated_df[updated_df['intent_refined']== True].shape ###Output _____no_output_____ ###Markdown Number of entries to be reviewed (excluding removed) ###Code updated_df[(updated_df['intent'] != updated_df['predicted_label']) & (updated_df['intent_refined'].isna()) & (updated_df['remove'].isna())].shape ###Output _____no_output_____ ###Markdown Once you are sure the numbers are correct, save the `updated_df` to a csv and reload it in the next cell. ###Code updated_df.to_csv('data/refined/nlu_data_refined_df.csv') nlu_data_df = pd.read_csv('data/refined/nlu_data_refined_df.csv', sep=',', index_col=0) ###Output _____no_output_____ ###Markdown Stop here and repeat the loops for the next intent or domain until you have done them all. Refining the last little bit!Run this once you have refined all of the intents for all of the domains to see if there is anything left over to do. ###Code batch_to_refine_df = updated_df[(updated_df['intent'] != updated_df['predicted_label']) & (updated_df['intent_refined'].isna()) & (updated_df['remove'].isna())] batch_to_refine_df.shape batch_to_refine_df = DataUtils.prepare_dataframe_for_refinement(batch_to_refine_df) to_review_sheet = MacroDataRefinement.create_sheet( batch_to_refine_df) to_review_sheet reviewed_intent_df = MacroDataRefinement.convert_sheet_to_dataframe( to_review_sheet) reviewed_intent_df reviewed_intent_df.to_csv('data/reviewed/reviewed_misc_domains_and_intents_to_sort_incorrectly_predicted_df.csv') ###Output _____no_output_____ ###Markdown This is an EXPERIMENTAL feature!We want to save the reviewed intents to the correct csvs that already exist. ###Code #TODO: Refactor this into a class. This is a bit of a hack. intents = reviewed_intent_df.intent.unique().tolist() domains = [domain for intent in intents for domain in reviewed_intent_df[reviewed_intent_df.intent == intent]['scenario'].unique()] intents_domains = zip(intents, domains) for intent, domain in intents_domains: reviewed_intent_df[reviewed_intent_df.intent == intent].to_csv( 'data/reviewed/reviewed_' + domain + '_' + intent + '_incorrectly_predicted_df.csv', mode='a', header=False) print(intent, domain) ###Output _____no_output_____ ###Markdown Mark any possible entries to be moved (if the data set is big, you can usually just remove them). ###Code #TODO: same bug as above! refined_intent_df = reviewed_intent_df.apply( MacroDataRefinement.move_entry, axis=1) ###Output _____no_output_____ ###Markdown Take a quick peak again, just to make sure there are the right number of entries. ###Code refined_intent_df.shape ###Output _____no_output_____ ###Markdown Add the updated `refined_intent_df` into the `nlu_data_df` dataset. 
###Code refined_intent_df = MacroDataRefinement.mark_entries_as_refined( refined_dataframe=refined_intent_df, refined_type='intent') updated_df = MacroDataRefinement.update_dataframe( nlu_data_df, refined_intent_df) ###Output _____no_output_____ ###Markdown Double check that the changes were implemented. ###Code updated_df[(updated_df['intent'] != updated_df['predicted_label']) & (updated_df['intent_refined'].isna()) & (updated_df['remove'].isna())] ###Output _____no_output_____ ###Markdown If the changes are correct, save the `nlu_data_df` to a csv and reload it in the next cell. ###Code updated_df.to_csv('data/refined/nlu_data_refined_df.csv') nlu_data_df = pd.read_csv('data/refined/nlu_data_refined_df.csv', sep=',', index_col=0) #TODO: Improve flow between a domain and refining within the domain. ###Output _____no_output_____ ###Markdown Benchmark the new data set ###Code removed_nlu_data_refined_df = nlu_data_df[nlu_data_df['remove'] != True] LR_intent_classifier_model, tfidf_vectorizer = NLUEngine.train_intent_classifier( data_df_path=removed_nlu_data_refined_df, labels_to_predict='intent', classifier=LR ) improved_intent_report_df = NLUEngine.evaluate_intent_classifier( data_df_path=removed_nlu_data_refined_df, labels_to_predict='intent', classifier=LR ) improved_intent_report_df Analytics.plot_report(intent_report_df, improved_intent_report_df) ###Output _____no_output_____ ###Markdown What is the next cell for? Do I need it? ###Code MacroDataRefinement.get_incorrect_predicted_intents_report( removed_nlu_data_refined_df[removed_nlu_data_refined_df['scenario'] == domain_selection], refined_incorrect_intent_predictions_df, improved_intent_report_df) ###Output _____no_output_____ ###Markdown From here everything needs to be refactoredWe will get the probabilities of each intent from the intent classifier and append that to our dataframe. Then we will review the 250 (or more??) lowest ranking entries and see if they are actually correct (spoiler alert: most aren't). ###Code removed_nlu_data_refined_df['intent_probability'] = removed_nlu_data_refined_df['answer_normalised'].apply( lambda x: IntentMatcher.get_prediction_probability(LR_intent_classifier_model, tfidf_vectorizer, x)) #TODO: review the weakest 100 in a sheet. low_probability_to_refine_df = removed_nlu_data_refined_df[removed_nlu_data_refined_df['intent_refined'] != True].sort_values(by='intent_probability', ascending=True).head(250) low_probability_to_review_df = low_probability_to_refine_df.drop( columns=[ 'userid', 'notes', 'answer', 'answerid', 'suggested_entities', 'intent_refined', 'remove', 'status', 'intent_probability', 'entity_refined' ]) to_review_sheet = MacroDataRefinement.create_sheet( low_probability_to_review_df) to_review_sheet reviewed_intent_df = MacroDataRefinement.convert_sheet_to_dataframe( to_review_sheet) #TODO: export reviewed_intent_df to a csv file and integrate it into the nlu_data_refined_df. 
reviewed_intent_df.to_csv('data/reviewed/reviewed_low_scoring_intents.csv') refined_intent_df = reviewed_intent_df.apply( MacroDataRefinement.move_entry, axis=1) refined_intent_df.to_csv( 'data/refined/refined_low_scoring_intents_df.csv', sep=',') refined_intent_df = MacroDataRefinement.mark_entries_as_refined( refined_dataframe=refined_intent_df, refined_type='intent') try: updated_df = MacroDataRefinement.update_dataframe( nlu_data_refined_df, refined_intent_df) print('updated refined df') except: updated_df = MacroDataRefinement.update_dataframe( upgraded_df, refined_intent_df) print('updated upgraded df') updated_df.to_csv('data/refined/nlu_data_refined_df.csv') updated_df[updated_df.index == 1259] ###Output _____no_output_____ ###Markdown The following are just some ideas, notes, etc. It will be removed after the next refactoring. Besides some incorrect utterances and intents, we can see that there is an overlap between the intent 'alarm_set' and the intent 'calandar_set'. This is because those two intents are not well defined and will require refinement. We will try to fix this. ###Code #TODO: integrate refined_intent_df into the main dataset and save it as nlu_data_refined_df #TODO: export nlu_data_refined_df to a csv file and save it as NLU-Data-Home-Domain-Annotated-Refined.csv #TODO: for every intent in the predicted intent column, get the top 5 tfidf features and their scores # Like this: https://stackoverflow.com/questions/34232190/scikit-learn-tfidfvectorizer-how-to-get-top-n-terms-with-highest-tf-idf-score #TODO: Make sure to pass them to the intent refinement process for each intent by putting them in the report! #TODO: get the counts of the terms from the utterances that are incorrect for a specific domain (should I filter by tfidf scores?) #TODO: Look up the most popular terms for an intent if they are red or green for that intent # For every word (feature) in the utterances, we get the coeficients for the intents. # From the shape, we see it contains the classes and the coeficients. 
coefs = LR_intent_classifier_model.coef_ coefs.shape from nlu_engine import LabelEncoder # We need to get the encoded classes classes = LR_intent_classifier_model.classes_ classes # We cant to get the actual feature names (the words) feature_names = tfidf_vectorizer.get_feature_names() # Let's try an example with TFIDF only, this only tells us overall the TFIDF score for each word, not related to the intent from nlu_engine import TfidfEncoder utterance = 'turn off the alarm I set' response = TfidfEncoder.encode_vectors( utterance, tfidf_vectorizer) for vector in response.nonzero()[1]: print(f'word: {feature_names[vector]} - ranking: {response[0, vector]}') # Let's rip out a list of tuples for the features and their coeficients for the intent output = [] for classIndex, features in enumerate(coefs): for featureIndex, feature in enumerate(features): output.append( (classes[classIndex], feature_names[featureIndex], feature)) feature_rank_df = pd.DataFrame(output, columns=['class', 'feature', 'coef']) feature_rank_df # It is a good idea to convert the classes from the encoded to a normal human form feature_rank_df['class'] = LabelEncoder.inverse_transform(feature_rank_df['class']) # Sort the features by the absolute value of their coefficient and color them red or green feature_rank_df["abs_value"] = feature_rank_df["coef"].apply(lambda x: abs(x)) feature_rank_df["colors"] = feature_rank_df["coef"].apply(lambda x: "green" if x > 0 else "red") feature_rank_df = feature_rank_df.sort_values("abs_value", ascending=False) # Take a look at an example of the word 'set' feature_rank_df[(feature_rank_df['feature'] == 'set') & (feature_rank_df['colors'] == 'red') ].sort_values('abs_value', ascending=False) ###Output _____no_output_____ ###Markdown Entity extraction report The entity extraction could be greatly improved by improving the features it uses. It would be great if someone would take a look at this. Perhaps the CRF features similar to what Snips uses would be better such as Brown clustering (probably). ###Code #TODO: implement brown clustering to improve entity extraction (see entity_extractor.py) ###Output _____no_output_____ ###Markdown It is important to have the NLTK tokenizer to be able to extract entities. ###Code try: nltk.data.find('tokenizers/punkt') except LookupError: nltk.download('punkt') ###Output _____no_output_____ ###Markdown Due to this error featured in [this git issue](https://github.com/TeamHG-Memex/sklearn-crfsuite/issues/60) we have to use an older version of scikit learn (sklearn<0.24), otherwise the latest version would work. Hopefully this gets fixed one day.. ###Code entity_report_df = NLUEngine.evaluate_entity_classifier(data_df=nlu_data_df) entity_report_df.sort_values(by=['f1-score']) #TODO: Benchmark the state features to find the best and the worst, remove/replace worst: add in state features like here: https://sklearn-crfsuite.readthedocs.io/en/latest/tutorial.html#let-s-check-what-classifier-learned # Specifically, we want print_state_features() ###Output _____no_output_____ ###Markdown As we have seen from the entity extraction report, the entity extraction is not working for the alarm_type. 
###Code #TODO: move this below the intent cleaning flow nlu_scenario_df = nlu_scenario_df[nlu_scenario_df['answer_annotation'].str.contains( 'alarm_type')] ###Output _____no_output_____ ###Markdown Entity Convert to ipysheet and reviewTODO: add in description of the types of fixes we can do to the NLU data for entity ###Code # TODO: same as above for intents but with predicted entities: report on them, break them down into a dictionary of dataframes and refine them.. ###Output _____no_output_____ ###Markdown For the example with 'alarm' and the alarm_type:* We see that the alarm_type entities are really event_name(ie wake up, soccer practice) except for ID 5879, we will need to change them to event_name and remove ID 5879.* The last one(ID 6320) is a mistake. Someone got confused with the prompt and assumed alarm is a security system. This is out of scope for the alarm domain, as the alarms are ones set on a phone or other device. We will drop this utterance.Once you are done reviewing, you convert it back to a dataframe and check to make sure it looks okay.Let's change all alarm_type entities to event_name. ###Code reviewed_scenario_df['answer_annotation'] = reviewed_scenario_df['answer_annotation'].str.replace( 'alarm_type', 'event_name') reviewed_scenario_df ###Output _____no_output_____ ###Markdown Okay dokey, now we can merge this with the original data set and see if it made a difference already(well of course it did!). ###Code nlu_data_df.drop( reviewed_scenario_df[reviewed_scenario_df['remove'] == True].index, inplace=True) reviewed_scenario_df = reviewed_scenario_df[~reviewed_scenario_df['remove'] == True] nlu_data_df.loc[nlu_data_df.index.intersection( reviewed_scenario_df.index), 'answer_annotation'] = reviewed_scenario_df['answer_annotation'] nlu_data_df[(nlu_data_df['scenario'].str.contains('alarm')) & (nlu_data_df['answer_annotation'].str.contains( 'event_name'))] ###Output _____no_output_____ ###Markdown Benchmark changed data setTODO: repeat reports for the changed data set for domain and entities and compare ###Code entity_reviewed_report_df = NLUEngine.evaluate_entity_classifier( data_df=nlu_data_df) entity_reviewed_report_df.sort_values(by=['f1-score']) ###Output _____no_output_____ ###Markdown If you are sure it is okay, you can save it as a csv file, make sure to name it correctly(i.e. `alarm_domain_first_review.csv`) ###Code reviewed_scenario_df.to_csv('alarm_domain_first_review.csv') ###Output _____no_output_____ ###Markdown Load it back up and check to make sure it looks okay. Make sure to give it the right name! ###Code audio_domain_first_review_df = pd.read_csv( 'alarm_domain_first_review.csv', index_col=0) audio_domain_first_review_df.tail(50) # TODO: implement the evaluate_classifier in the NLU engine to check f1 score for intents and entities in the domain vs original NLU data of domain! # Value: benchmark! #TODO: implement a flow for getting the domains with the lowest f1 scores by intent/domain and entities and cleaning them by the order of the lowest f1 scores # TODO: concat all reviewed dfs and save to csv # TODO: add benchmark for whole NLU data set before and after cleaning! (by intents and domains!) 
# TODO: review the entries marked 'review' # TODO: add new column for notes # TODO: change flow of review for only ones that should be reviewed, not all of the ones that have been changed (track changes by comparing against the original data set) # TODO: do the changed utterances have to be changed in other fields too or is it just enough for the tagged utterance field? # TODO: add visualizations of domains, their intents, keywords in utterances, and entities to top ###Output _____no_output_____
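###Markdown The cell below is a minimal, self-contained sketch of the approach this notebook relies on: TF-IDF features plus Logistic Regression for intent classification, followed by per-intent keyword rankings taken from the model coefficients. It is not the `NLUEngine` implementation, and the toy utterances and intent labels are made up purely for illustration. ###Code
# Sketch only: scikit-learn TF-IDF + Logistic Regression intent classifier,
# then per-intent keyword rankings from the coefficients.
# The toy utterances and labels below are illustrative, not from the NLU data set.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

utterances = [
    "wake me up at seven tomorrow",
    "set an alarm for six am",
    "cancel my alarm for tomorrow",
    "delete the alarm i set",
    "add a meeting to my calendar",
    "schedule lunch with sam on friday",
    "remove the dentist appointment from my calendar",
    "clear my calendar for monday",
]
intents = [
    "alarm_set", "alarm_set", "alarm_remove", "alarm_remove",
    "calendar_set", "calendar_set", "calendar_remove", "calendar_remove",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(utterances)
classifier = LogisticRegression(max_iter=1000).fit(X, intents)

# Predict on the training utterances themselves, mirroring the notebook's
# "train on everything, then inspect what is still wrong" refinement loop.
print(list(zip(intents, classifier.predict(X))))

# Keyword (feature) rankings per intent from the coefficients, similar in
# spirit to the intent_keyword_feature_rankings step above.
feature_names = vectorizer.get_feature_names_out()  # get_feature_names() on older sklearn
for intent, coefs in zip(classifier.classes_, classifier.coef_):
    top = np.argsort(coefs)[::-1][:3]
    print(intent, [(feature_names[i], round(float(coefs[i]), 3)) for i in top])
###Output _____no_output_____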
notebooks/lab3.ipynb
###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [KABIRI Salim](https://github.com/KsalimK)- [Ait Lemqeddem Amine](https://github.com/AmineAitLemqeddem) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output <class 'numpy.ndarray'> ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code int_vector1=np.ones(100) int_vector2=np.array([1 for i in range(100)]) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code rand_vect=np.random.randint(10,50,2) rand_vect ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code vect_construction1=np.linspace(0,0.8,5) vect_construction1 vect_construction2=np.arange(0,1,0.2) #stop+1 to achieve the stop desired vect_construction2 ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code float_array=np.array([1,23,np.pi,np.sqrt(5)]) int_array=float_array.astype(int) int_array ###Output _____no_output_____ ###Markdown Given a boolean array- return the indices where - negate the array inplace? ###Code bool_array=np.array([False,False,True,False,True]) #rand_vect=np.random.randint(0,2,nb point) , rand_vect.astype(bool) Indices_True=np.nonzero(bool_array==True) Indices_True Indices_False=np.nonzero(bool_array==False) ##########Negate the array########## bool_array_negation=(bool_array==False) ## np.invert bool_array_negation ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u=np.array([1,2,3,4,5]) v=np.array([6,7,8,9,0]) ################inner product########################## inner_product1=np.vdot(u,v) inner_product1 inner_product2=np.dot(u,v.T) inner_product2 ###############outer product#################### outer_product1=np.outer(u,v) outer_product1 v_reshape=v.reshape((1,5)) u_reshape=u.reshape((5,1)) outer_product2=np.dot(u_reshape,v_reshape) outer_product2 #############the outer sum########## ###Method1##### M1=np.zeros((len(u),len(v))) for i in range(len(u)): for j in range(len(v)): M1[i][j]=u[i]+v[j] M1 ####Method2#### M2=np.array([u]*len(u)).T+v M2 M1==M2 ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code ######## M Méthode list of list ####### M=[[0,1,2],[3,4,5],[6,7,7]] M[1][1] ######## M Méthode 1 numpy array ####### M1=np.array([[0,1,2],[3,4,5],[6,7,8]]) M1 M1[1][1] ######## M Méthode 2 numpy array ####### M2=np.arange(0,9,1) M2=M2.reshape((3,3)) M2 M2[1][1] ############################################################################ #Par la suite nous allons utiliser M2 comme variable de stockage de la matrice M# ############################################################################ ########### Inverser la première et la deuxième ligne######## M2_copy=np.copy(M2) M2_copy[0,:],M2_copy[1,:]=M2[1,:],M2[0,:] M2_copy ########## Extraction de la sous-matrice ############# ##### Méthode 1 ###### submatrix1=M2[1:,1:] submatrix1 ##### Méthode 2 ###### submatrix2=M2[1:3,1:3] submatrix2 ##### Méthode 3 ###### submatrix3=M2[1:][:,1:] submatrix3 ######## diagonale de M ##### ##### Méthode 1 ###### diag1=np.diag(M2) diag1 ##### Méthode 2 ###### diag2=np.array([M2[i][i] for i in range(len(M))]) diag2 ##### Méthode 3 ###### diag3=M2[1:][:,1:] diag3 ######################## M^3 ########################## ##### Méthode 1 ###### M3_1=np.dot(np.dot(M2,M2),M2) ##### Méthode 2 ###### D,V =np.linalg.eig(M2) matrixD=np.zeros((3,3)) matrixD[0][0]=D[0] matrixD[1][1]=D[1] matrixD[2][2]=D[2] M3_2=np.dot(np.dot(np.linalg.inv(V),matrixD**3),V) ######### #Mathématiquement, cette méthode devrait donner le bon résultat mais j'obtiens un résultat différent (à voir avec le prof) ######## ################## v.T M and MN############ v=np.array([1,2,3]) N=np.array([[1,2,3],[4,5,6],[7,8,9]]) vTM=np.dot(v.T,N) vTM MN=np.dot(M,N) MN ##################### vectorize the matrix m ################# M3_copy=np.copy(M2) vect1=M3_copy.reshape(9) vect1 vect2=M3_copy.T.reshape(9) vect2 ##################### row-wise and column-wise multiplication######## v=np.array([1,2,3]) Mv_row_wise=M2*v.reshape((3,1)) Mv_row_wise Mv_column_wise=M2*v Mv_column_wise ###Output _____no_output_____ ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code def is_symmetric(x): """function that checks whether a given n x n matrix is symmetric Args: x (array): the matrix to test Returns: bool: True if x is symmetric , False if not """ return np.array_equal(x,x.T) x1=np.array([[1,2,3],[4,5,6],[6,7,8]]) is_symmetric(x1) #False x2=np.array([[1,2,3],[2,2,3],[3,3,3]]) is_symmetric(x2) #True ###Output _____no_output_____ ###Markdown RandomREQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. Consider the Bernoulli(0.4) distribution- Propose at least 2 ways to generate n=1000 samples from it- Compute the empirical mean and variance ###Code rng=get_random_number_generator(0) simulation1= rng.binomial(1,0.4,1000) mean1=np.mean(simulation1) var1=np.var(simulation1) simulation1 , mean1 , var1 simulation2= np.array([rng.binomial(1,0.4) for i in range(1000)]) mean2=np.mean(simulation2) var2=np.var(simulation2) simulation2 , mean2 , var2 ###Output _____no_output_____ ###Markdown Consider a random matrix of size $50 \times 100$, filled with i.i.d. 
standard Gaussian variables, compute- the absolute value of each entry- the sum of each row- the sum of each colomn - the (euclidean) norm of each row- the (euclidean) norm of each column ###Code Gaussian_matrix=rng.normal(0,1,(50,100)) Gaussian_matrix Gaussian_matrix_abs=np.abs(Gaussian_matrix) Sum_row = Gaussian_matrix.sum(axis=1) Sum_row Sum_column = Gaussian_matrix.sum(axis=0) Sum_column norm_row = np.linalg.norm(Gaussian_matrix,axis=1) norm_row norm_row = np.linalg.norm(Gaussian_matrix,axis=0) norm_row ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Capucine GARCON]([link](https://github.com/CapucineGARCON))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output <class 'numpy.ndarray'> ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code A1 = np.ones((1, 100), dtype=int) A2 = np.array([1 for i in range (100)]) A1, A2 ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code B = np.arange(10, 50) B ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code C = np.arange(0, 0.9, 0.2) C ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code # Convertion of a float array into an integer array C1 = np.arange(0, 0.9, 0.2) np.linspace C2 = C1.astype(int) C1, C2 ###Output _____no_output_____ ###Markdown Given a boolean array- return the indices that are True - negate the array inplace ###Code M = np.array([[True, True, False, True, False, False]]) # Two methods to return the indices that are True j = np.where(M == True) i =np.flatnonzero(M == True) # Negation of M: two methods N = ~M N2 = np.invert(M) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code # Inner product u = np.array([1, 2, 3]) v = np.array([2, 2, 1]) np.dot(u,v), u @ v # Outer product np.outer(u,v) # Outer sum matrix np.add.outer(u,v) ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code # Creation of M using a list of list M = [[0, 1, 2], [3, 4, 5], [6, 7, 8]] M[1][1] # Creation of M using numpy: 3 methods M = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) N = np.arange(0, 9) N.shape = (3, 3) O = np.matrix('[0, 1, 2; 3, 4, 5; 6, 7, 8]') M[1][1] # Swap first and second raw M[0], M[1] = M[1], M[0] M # Extraction of the submatrix M[1:3, 1:3], # Extraction of the diagonal np.diag(M), M.diagonal(), [M[i][i] for i in range(len(M))] # Calculation of M^3 M_cube1 = np.dot(M,np.dot(M,M)) M_cube2= np.linalg.matrix_power(M, 3) M_cube3 = M @ M @ M M_cube1, M_cube2, M_cube3 # Calculation of vTM M = [[0, 1, 2], [3, 4, 5], [6, 7, 8]] v = np.array([1, 2, 3]) np.dot(np.transpose(v), M) # Vectorisation of the matrix M M = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) M_vect = tuple(np.ravel(M, order ='C')) M_vect2 = tuple(np.ravel(M, order='F')) M_vect, M_vect2 v = np.array([1, 2, 3]) np.dot(v, M), np.dot(M,v) ###Output _____no_output_____ ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code # Check if the matrix is symmetric or not def is_symmetric(M): return(np.all(M == np.transpose(M))) is_symmetric(M = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])), is_symmetric(np.array([[1, 2, 3],[2, 0, 2], [3, 2, 1]])) ###Output _____no_output_____ ###Markdown RandomREQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. Consider the Bernoulli(0.4) distribution- Propose at least 2 ways to generate n=1000 samples from it- Compute the empirical mean and variance ###Code # Generation of 1000 samples rng = get_random_number_generator(0) first_way = rng.binomial(1, 0.4, 1000) # Calculation of mean and variance np.mean(first_way), np.var(first_way) ###Output _____no_output_____ ###Markdown Consider a random matrix of size $50 \times 100$, filled with i.i.d. 
standard Gaussian variables, compute- the absolute value of each entry- the sum of each row- the sum of each colomn - the (euclidean) norm of each row- the (euclidean) norm of each column ###Code # Computation of the Gaussian Matrix of size 50x100 G = rng.normal(size = (50, 100)) # Absolute value of the Gaussian Matrix G_absolute = abs(G) #Sum of each row G_sum_rows = G.sum(axis=1) # Sum of each colomn G_sum_colomn = G.sum(axis=0) # Norm of each row G_norm_row = np.linalg.norm(G, axis=1) # Norm of each colomn G_norm_colomn = np.linalg.norm(G, axis = 0) ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpy Course: [SDIA-Python](https://github.com/guilgautier/sdia-python) Date: 10/06/2021 Instructor: [Guillaume Gautier](https://guilgautier.github.io/) Students (pair): - [Student 1](https://github.com/AnnaMarizy/sdia-python) - [Student 2](https://github.com/LoanSarazin/sdia-python) ###Code %load_ext autoreload %autoreload 2 import numpy as np from lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code l1 = np.array([1 for i in range(100)]) l2 = np.ones(100) l3 = np.full(100, 1) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code l = np.arange(10, 50) ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code l = np.arange(0, 1, 0.2) l ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code l = np.arange(0, 5, 0.3) print(l) l = l.astype(np.int16) print(l) ###Output _____no_output_____ ###Markdown Given a boolean array - return the indices that are True - negate the array inplace? ###Code l = np.random.randint(0, 2, 5).astype(bool) print(f"array = {l}, indices = {np.nonzero(l)[0]}") np.logical_not(l, out=l) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u, v = np.random.randint(0, 3, 5), np.random.randint(0, 2, 5) print(f"u = {u}, v = {v}") print(f"inner product = {np.inner(v, u)}") print(f"outer product = {np.outer(u, v)}") ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code M = [[0, 1, 2], [3, 4, 5], [6, 7, 8]] M[1][1] M = np.reshape(np.arange(0, 9, 1), (3, 3)) M[1,1] ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code A = np.ones(100) print(A) A = np.full(100,1) print(A) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code U = np.arange(10,50) print(U) ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code U = np.linspace(0,0.8,5) print(U) U = np.arange(0,1,0.2) print(U) ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code U=np.array([5.,6.,2.5,3.8,3.14,9.58]) print(U) V=U.astype(int) print(V) ###Output _____no_output_____ ###Markdown Given a boolean array- return the indices where - negate the array inplace? ###Code U = np.array([True, False, False, True, True, True, False, True]) print(U) print(np.argwhere(U)) print(np.where(U == False,True,False)) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code U=np.array([1,2,3,4,5,6]) V=np.array([7,8,9,10,11,12]) print(np.dot(U,V)) print(np.inner(U,V)) tU = np.reshape(U,(-1,1)) tV = np.reshape(V,(-1,1)) print(np.outer(U,V)) print(np.dot(tU,tV.T)) M = tU.T + tV print(M) M=np.add.outer(U,V) print(M) ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code M=[ [0,1,2], [3,4,5], [6,7,8] ] print(M[1][1]) M=np.array(M) print(M) print(M[1][1]) M=np.arange(9).reshape(3,3) print(M) print(M[1,1]) #M[[1,0]] = M[[0,1]] #print(M) #print(M[1:,1:]) #print(M[np.ix_([1,2],[1,2])]) #print(np.array([M[1][1:],M[2][1:]])) print(np.diag(M)) #print([M[i][i] for i in range(len(M))]) print(M@M@M) #print(np.dot(np.dot(M,M),M)) V=np.array(range(3)) print(np.dot(V,M)) N=np.array(range(12)).reshape(3,4) print(M@N) print(M.reshape(1,-1)) print((M.T).reshape(1,-1)) V = np.array([1,2,3]) print(V*M[:,None]) print((V*(M.T)[:,None]).T) ###Output _____no_output_____ ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code def is_symmetric(M): assert len(M) == len(M[0]) return (M.T == M).all() print(is_symmetric(M)) N = np.array([[1,2,3], [2,4,5], [3,5,8]]) print(is_symmetric(N)) ###Output _____no_output_____ ###Markdown RandomREQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. ###Code from sdia_python.lab2.utils import get_random_number_generator ###Output _____no_output_____ ###Markdown Consider the Bernoulli(0.4) distribution- Propose at least 2 ways to generate n=1000 samples from it- Compute the empirical mean and variance ###Code #2 ways to generate samples of a Bernoulli distribution rng = get_random_number_generator(100) a = rng.binomial(1, 0.4, 1000) b = (rng.uniform(0,1,1000)<0.4).astype(int) print (a,b) #empirical mean and variance print (np.mean(a)) print (np.mean(b)) print (np.var(a)) print (np.var(b)) ###Output _____no_output_____ ###Markdown Consider a random matrix of size $50 \times 100$, filled with i.i.d. standard Gaussian variables, compute- the absolute value of each entry- the sum of each row- the sum of each colomn - the (euclidean) norm of each row- the (euclidean) norm of each column ###Code #generate a matrix of size 50×100 , filled with i.i.d. standard Gaussian variables c = rng.normal(size = (50,100)) print (c) #absolute value print (abs(c)) #sum of each row print (np.sum(c, axis=1)) #sum of each column print (np.sum(c, axis=0)) #euclidian norm of each row print(np.linalg.norm(c, axis = 1)) #euclidian norm of each column print(np.linalg.norm(c, axis = 0)) ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output <class 'numpy.ndarray'> ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code v1=np.ones(100) print(v1, len(v1)) v2=np.full(100, fill_value=1) print(v2) ###Output [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] 100 [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1] ###Markdown Create a vector with values ranging from 10 to 49 ###Code v=np.arange(10,50) print(v) ###Output [10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49] ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code v1=np.linspace(0,0.8,5) print(v1, type(v1)) v2=np.arange(0,1,0.2) print(v2, type(v2)) ###Output [0. 0.2 0.4 0.6 0.8] <class 'numpy.ndarray'> [0. 0.2 0.4 0.6 0.8] <class 'numpy.ndarray'> ###Markdown Convert a float array into an integer array in place ###Code float_array=np.array([0.2,5.6,4.3]) int_array=float_array.astype(int) print(int_array) ###Output [0 5 4] ###Markdown Given a boolean array- return the indices where - negate the array inplace? ###Code boolean_array=np.array([True,True,False,True]) where_true=np.where(boolean_array) print(where_true) inv=np.invert(boolean_array) print(inv, type(inv)) ###Output (array([0, 1, 3], dtype=int64),) [False False True False] <class 'numpy.ndarray'> ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u=np.array([[1,2,3]]).T v=np.array([[1,1,1]]).T sol1=np.dot(v.T,u) print('inner product :', sol1) sol2=np.matmul(v.T,u) print('inner product :',sol2) sol3=np.dot(u,v.T) print('outer product : \n',sol3) sol4=np.matmul(u,v.T) print('outer product : \n',sol4) sol5=np.add(u,v.T) print('outer sum matrix :\n',sol5) sol6=np.add(u,v.T) print('outer sum matrix :\n',sol6) ###Output inner product : [[6]] inner product : [[6]] outer product : [[1 1 1] [2 2 2] [3 3 3]] outer product : [[1 1 1] [2 2 2] [3 3 3]] outer sum matrix : [[2 2 2] [3 3 3] [4 4 4]] ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code M=[[0,1,2],[3,4,5],[6,7,8]] print('element in the middle : ', M[1][1] ) #----------------------- M1=np.array([[0,1,2],[3,4,5],[6,7,8]]) print(M1, type(M1)) print('element in the middle : ', M1[1][1] ) M2=np.matrix([[0,1,2],[3,4,5],[6,7,8]]) print(M2, type(M2)) print('element in the middle : ', M2[1,1] ) #----------------------- M1[[0,1]]=M1[[1,0]] print('swap rows for array : \n', M1) M2[[0,1]]=M2[[1,0]] print('swap rows for matrix: \n', M2) #----------------------- M1=np.array([[0,1,2],[3,4,5],[6,7,8]]) M2=np.matrix([[0,1,2],[3,4,5],[6,7,8]]) M1_sub1=M1[np.ix_([1,2],[1,2])] print('submatrix : \n', M1_sub1) M1_sub2=M1[1:3,1:3] print('submatrix : \n', M1_sub2) M1_sub3=M1[[[1],[2]],[1,2]] print('submatrix : \n', M1_sub3) #----------------------- M1_d1=np.diag(M1) print('diagonal : \n', M1_d1) M1_d2=np.array([M1[k,k] for k in range(len(M1))]) print('diagonal : \n', M1_d2) #----------------------- M_cube1=np.dot(M,np.dot(M,M)) print('M cube : \n', M_cube1) M_cube2=np.linalg.matrix_power(M,3) print('M cube : \n', M_cube2) #----------------------- M_vect1=np.matrix.flatten(M2) print('M vect : \n', M_vect1) M_vect1_F=np.matrix.flatten(M2,'F') print('M vect F: \n', M_vect1_F) M_vect2=M1.reshape((1,9)) print('M vect : \n', M_vect2) M_vect2_F=(M1.T).reshape((1,9)) print('M vect F : \n', M_vect2_F) #----------------------- v=np.array([[1,2,3]]).T product1=np.dot(M1,v) print("product : \n",product1) ###Output element in the middle : 4 [[0 1 2] [3 4 5] [6 7 8]] <class 'numpy.ndarray'> element in the middle : 4 [[0 1 2] [3 4 5] [6 7 8]] <class 'numpy.matrix'> element in the middle : 4 swap rows for array : [[3 4 5] [0 1 2] [6 7 8]] swap rows for matrix: [[3 4 5] [0 1 2] [6 7 8]] submatrix : [[4 5] [7 8]] submatrix : [[4 5] [7 8]] submatrix : [[4 5] [7 8]] diagonal : [0 4 8] diagonal : [0 4 8] M cube : [[ 180 234 288] [ 558 720 882] [ 936 1206 1476]] M cube : [[ 180 234 288] [ 558 720 882] [ 936 1206 1476]] M vect : [[0 1 2 3 4 5 6 7 8]] M vect F: [[0 3 6 1 4 7 2 5 8]] M vect : [[0 1 2 3 4 5 6 7 8]] M vect F : [[0 3 6 1 4 7 2 5 8]] product : [[ 8] [26] [44]] ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code def is_symmetric(M): if np.all(M==np.transpose(M)): return True else: return('matrix non-symmetric') print('M:',is_symmetric(M2)) N=np.matrix([[1,5,3],[5,2,6],[3,6,7]]) print('N:',is_symmetric(N)) ###Output M: matrix non-symmetric N: True ###Markdown RandomREQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. 
Consider the Bernoulli(0.4) distribution- Propose at least 2 ways to generate n=1000 samples from it- Compute the empirical mean and variance ###Code # binomiale np.random.Generator.uniform([0,1],[0,2]) # générer une uniforme rng = get_random_number_generator (None) p=0.4 rng.uniform()== 0.4 ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output <class 'numpy.ndarray'> ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code vector1=np.ones(100) vector2=np.zeros(100)+1 print(vector1) print(vector2) ###Output [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] ###Markdown Create a vector with values ranging from 10 to 49 ###Code vec=np.arange(10,50) vec ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code vect1=np.array([0, 0.2, 0.4, 0.6, 0.8]) vect2=np.arange(0, 1, 0.2) print(vect1) print(vect2) ###Output [0. 0.2 0.4 0.6 0.8] [0. 0.2 0.4 0.6 0.8] ###Markdown Convert a float array into an integer array in place ###Code floatvec=np.array([1.2, 5.2, 2.0]) floatvec.astype(int) ###Output _____no_output_____ ###Markdown Given a boolean array- return the indices where True- negate the array inplace? ###Code bl=np.array([True,False,True,True,False]) np.where(bl==True) np.invert(bl) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u=np.array([1, 2, 3]) v=np.array([0, 1, 2]) ide=np.ones(3) np.dot(u,v) np.sum(u*v) np.outer(u,v) uu=np.matrix(u) vv=np.matrix(v) np.array(uu.T*vv) np.outer(u,ide)+np.outer(v,ide).T np.array(np.matrix([u,u,u]).T+np.matrix([v,v,v])) ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code M = [[0, 1, 2], [3, 4, 5], [6, 7, 8]] M[1][1] M1 = np.array(M) M1 M2 = np.arange(0,9).reshape(3,3) M2 M1[0, :] = M1[1, :] M1[1, :] = M2[0, :] M1 M2[1:, 1:] M2[np.ix_([1,2],[1,2])] np.diag(M2) M2.diagonal() np.dot(np.dot(M2, M2), M2) Mmatrix=np.matrix(M2) np.array(Mmatrix**3) v=np.array([1, 0, 1]) N=np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) print(np.dot(v, M2)) print(np.dot(M2, N)) M2.reshape(9) np.resize(M2,9) M2.T.reshape(9) np.resize(M2.T,9) v=np.array([1, 2, 3]) (M2.T*v).T M2*v ###Output _____no_output_____ ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code def is_symmetric(M): l=M==M.T return not (False in l) A=np.array([[1,2,3],[2,5,6],[3,6,7]]) is_symmetric(A) is_symmetric(M2) ###Output _____no_output_____ ###Markdown RandomREQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. Consider the Bernoulli(0.4) distribution- Propose at least 2 ways to generate n=1000 samples from it- Compute the empirical mean and variance ###Code rng = get_random_number_generator(None) ber=rng.binomial(1, 0.4, 1000) ber l=np.zeros(1000) for i in range(1000): l[i]=rng.binomial(1, 0.4) l ber.mean() ber.std() ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpy Course: [SDIA-Python](https://github.com/guilgautier/sdia-python) Date: 10/06/2021 Instructor: [Guillaume Gautier](https://guilgautier.github.io/) Students (pair): - [Student 1](https://github.com/AnnaMarizy/sdia-python) - [Student 2](https://github.com/LoanSarazin/sdia-python) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code l1 = np.array([1 for i in range(100)]) l2 = np.ones(100) l3 = np.full(100, 1) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code l = np.arange(10, 50) ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code l = np.arange(0, 1, 0.2) l ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code l = np.arange(0, 5, 0.3) print(l) l = l.astype(np.int16) print(l) ###Output _____no_output_____ ###Markdown Given a boolean array - return the indices that are True - negate the array inplace? 
###Code l = np.random.randint(0, 2, 5).astype(bool) print(f"array = {l}, indices = {np.nonzero(l)[0]}") np.logical_not(l, out=l) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u, v = np.random.randint(0, 3, 5), np.random.randint(0, 2, 5) print(f"u = {u}, v = {v}") print(f"inner product = {np.inner(v, u)}") print(f"outer product = {np.outer(u, v)}") ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. - Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code M = [[0, 1, 2], [3, 4, 5], [6, 7, 8]] M[1][1] M = np.reshape(np.arange(0, 9, 1), (3, 3)) M[1,1] ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output <class 'numpy.ndarray'> ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code M = np.ones(100) print(M) ###Output [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] ###Markdown Create a vector with values ranging from 10 to 49 ###Code M = np.arange(10,50) print (M) ###Output [10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49] ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code M = np.arange(0,1,0.2) print(M) ###Output [0. 0.2 0.4 0.6 0.8] ###Markdown Convert a float array into an integer array in place ###Code M = np.arange(0,10,0.5) M = M.astype(int) print(M) ###Output [0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9] ###Markdown Given a boolean array - return the indices where it's True - negate the array inplace? 
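A minimal self-contained sketch for this exercise; the matrix `A` below is only an illustrative way to build a boolean array and is not part of the original prompt.

###Code
import numpy as np

# Build a boolean array by thresholding a random matrix.
# (Caution: `A < 0,5` parses as the tuple `(A < 0, 5)`; a 0.5 threshold is written `A < 0.5`.)
A = np.random.randn(3, 3)
b = A < 0.5

indices_true = np.nonzero(b)   # tuple of (row, column) indices where b is True
np.logical_not(b, out=b)       # negate the array in place
###Output
_____no_output_____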
###Code A = np.random.randn(3,3) bool_A = A<0,5 print(bool_A) ###Output (array([[ True, True, True], [ True, True, False], [ True, True, True]]), 5) ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u = np.arange(0,4) v = np.arange(4,8) Sa = v.T @ u Sb = u @ v.T M = u + v.T print(Sb) print(M) ###Output 38 [ 4 6 8 10] ###Markdown Practical session 3 - Practice with numpy Course: [SDIA-Python](https://github.com/guilgautier/sdia-python) Date: 10/06/2021 Instructor: [Guillaume Gautier](https://guilgautier.github.io/) Students (pair): - [Hadrien Salem]([link](https://github.com/SnowHawkeye)) - [Emilie Salem]([link](https://github.com/EmilieSalem)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code vect1 = np.ones(100) vect2 = np.array([1 for i in range(100)]) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code np.array([i for i in range(10,50)]) ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code float_vect = np.array([0.1*i for i in range(0,10,2)]) ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code int_vect = float_vect.astype(int) int_vect ###Output _____no_output_____ ###Markdown Given a boolean array - return the indices where - negate the array inplace? ###Code bool_array = np.array([True, False, True, True]) index_true = np.where(bool_array == True) index_false = np.where(bool_array == False) bool_array_negate = np.invert(bool_array) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least - 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size) - 2 ways to compute the outer product matrix $u v^{\top}$ - 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u = np.array([0,1]) v = np.array([2,3]) inner_prod1 = np.dot(u,v) inner_prod2 = np.inner(u,v) u = np.array([0,1, 2]) v = np.array([2,3]) outer_prod1 = np.tensordot(u, v, axes=0) outer_prod2 = np.outer(u,v) sum_matrix1 = np.add.outer(u,v) sum_matrix2 = "?" # TODO ###Output _____no_output_____ ###Markdown Given the following matrix $$ M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\ \end{pmatrix} $$ - Create $M$ using as a list of lists and access the element in the middle - Propose at least 2 ways to create $M$ using numpy and access the element in the middle - Swap its first and second row - Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$ - Propose at least 2 ways to extract the diagonal of $M$ - Propose at least 2 ways to compute $M^3$ - Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$ - Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code # M as a list M_list = [ [0, 1, 2], [3, 4, 5], [6, 7, 8], ] middle_element_list = M_list[1][1] # M as an array M_numpy1 = np.array(M_list) M_numpy2 = np.array([i for i in range(9)]).reshape((3,3)) M_numpy3 = np.array(np.arange(9)).reshape((3,3)) middle_element_numpy = M_numpy3[1,1] # Swapping rows M_swap = np.copy(M_numpy1) M_swap[[0,1]] = M_swap[[1,0]] M_swap M = np.copy(M_numpy3) # Extracting submatrixes submatrix1 = M[1:,1:] submatrix2 = M[[1,2]].T[[1,2]].T submatrix3 = M[[1,2]][:,1:] # Extracting diagonal diagonal1 = np.diag(M) diagonal2 = np.array([M[i][i] for i in range(len(M))]) # Compute M cube M_cube = M.dot(M).dot(M) M_cube2 = np.linalg.matrix_power(M, 3) # Products v = np.array([0,1,0]) N = np.array(np.arange(0,6)).reshape(3,2) vect_prod = v.dot(M) matrix_prod = M.dot(N) # Vectorize M M_vectorized1_rows = M.reshape(1,9)[0] M_vectorized1_columns = M.T.reshape(1,9)[0] M_vectorized2_rows = M.flatten() M_vectorized2_columns = M.flatten(order = 'F') # Row-wise and column-wise multiplication v = np.array([1,0,2]) row_wise_prod = np.multiply(M.T,v).T column_wise_prod = np.multiply(M,v) row_wise_prod, column_wise_prod ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output <class 'numpy.ndarray'> ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code one = np.ones(100) one_bis = np.zeros(100) + 1 print(one,"\n", one_bis) ###Output [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] ###Markdown Create a vector with values ranging from 10 to 49 ###Code arange = np.arange(10,50) print(arange) ###Output [10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49] ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code arange_bis = np.arange(0.0,1.0,0.2) print(arange_bis) ###Output [0. 0.2 0.4 0.6 0.8] ###Markdown Convert a float array into an integer array in place ###Code arange_bis.astype(int) ###Output _____no_output_____ ###Markdown Given a boolean array - return the indices where - negate the array inplace? 
###Code boolean = np.array([False,False,False,True,False,True]) np.argwhere(boolean) np.invert(boolean, out = boolean) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u,v = np.arange(1,10),np.arange(11,20) np.inner(v,u) np.dot(v,u) np.outer(u,v) np.multiply.outer(u,v) np.add.outer(u,v) ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. - Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code M = [[0, 1, 2],[3,4,5],[6,7,8]] print(M,"\n",M[1][1]) M = np.array(M) M = np.arange(9).reshape(3,3) print("\n",M, "\n", M[1,1]) M[[0, 1]] = M[[1, 0]] print("\n",M) M = np.arange(9).reshape(3,3) M[1:3,1:3] I = slice(1,3) print(M[I,I]) np.where() ###Output [[4 5] [7 8]] ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code def is_symmetric(M): return np.array_equal(M, M.T) A = np.array(np.arange(9)).reshape((3,3)) B = np.identity(8) print(f"is A symmetric? > {is_symmetric(A)}") print(f"is B symmetric? > {is_symmetric(B)}") ###Output _____no_output_____ ###Markdown Random REQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. Consider the Bernoulli(0.4) distribution - Propose at least 2 ways to generate n=1000 samples from it - Compute the empirical mean and variance ###Code rng = get_random_number_generator(0) results1 = np.array([rng.binomial(1,0.4) for _ in range(1000)]) results2 = rng.binomial(1, 0.4, 1000) mean = np.mean(results1) variance = np.var(results1) mean, variance ###Output _____no_output_____ ###Markdown Consider a random matrix of size $50 \times 100$, filled with i.i.d. 
standard Gaussian variables, compute - the absolute value of each entry - the sum of each row - the sum of each colomn - the (euclidean) norm of each row - the (euclidean) norm of each column ###Code gaussian_matrix = rng.normal(size = (50, 100)) gaussian_matrix_absolute = abs(gaussian_matrix) gaussian_sum_rows = gaussian_matrix.sum(axis=1) gaussian_sum_columns = gaussian_matrix.sum(axis=0) gaussian_norm_rows = np.linalg.norm(gaussian_matrix, axis=1) gaussian_norm_columns = np.linalg.norm(gaussian_matrix, axis=0) ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code V1 = np.ones(100) V2 = np.zeros(100)+1 V3 = np.array([1]*100) print(np.array_equal(V1,V2)) print(np.array_equal(V1,V3)) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code V1 = np.arange(10,50,1) print(V1) ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code V1 = np.arange(0,1,0.2) print(V1) ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code V1 = np.arange(0,30,np.pi) print(V1) V2 = V1.astype(int) print(V2) ###Output _____no_output_____ ###Markdown Given a boolean array- return the indices where - negate the array inplace? ###Code V = np.random.choice([False, True], size=(10)) print("V : ",V) Ind = np.where(V==True) print("Indices of True values in V : ",Ind) print("not(V) : ",np.logical_not(V)) print("not(V) : ",np.invert(V)) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code U = np.random.randint(0,10,5) V = np.random.randint(0,10,5) inner_prod1 = (U).dot(V) inner_prod2 = np.sum(U*V) print("inner product : ", inner_prod1, inner_prod2) outer_prod1 = U[:, None].dot(V[None, :]) outer_prod2 = np.outer(U,V) print("outer product : \n", outer_prod1, "\n", outer_prod2) sum1 = U + np.flip(V) sum2 = U + V[::-1] print("sum : ", sum1, sum2) ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code M = [[3*i+j for j in range(3)] for i in range(3)] print(M) print("middle element : ", M[1][1], "\n") M1 = np.reshape(np.arange(0,9,1),(3,3)) M2 = np.fromfunction(lambda i, j: 3*i+j, (3, 3),dtype=int) print("initialize matrix :\n", M1, "\n", M2, "\n") print("acess element :\n", M1[1,1],M[1][1], "\n") subM1 = M1[1:,1:] subM2 = M1[[1,2]][:,[1,2]] subM3 = M1 print("submatrix :\n",subM1,"\n", subM2, "\n", subM3, "\n") diag1 = np.diag(M1) diag2 = np.diagonal(M1) print("diagonal :\n",diag1,"\n", diag2, "\n") M3pow1 = np.linalg.matrix_power(M1, 3) M3pow2 = M1.dot(M1.dot(M1)) print("M^3 :\n", M3pow1,"\n", M3pow2, "\n") v = np.array([1,2,3]) N = np.fromfunction(lambda i, j: (i+1)*(j+1), (3, 3),dtype=int) print("v=",v,"\n","N=",N) print("vM=",v.dot(M)) print("MN=",M1.dot(N)) M_vec1 = np.reshape(M1,9) print(M_vec1) M_vec2 = np.reshape(M1,9,order="F") print(M_vec2) v = np.array([1,2,3]) print(M*v) print(M*np.reshape(v,(3,1))) ###Output _____no_output_____ ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code from sdia_python.lab3.functions import is_symetric S = np.fromfunction(lambda i, j: (i+1)*(j+1), (3, 3),dtype=int) NS = np.fromfunction(lambda i, j: (i+1)*(j+2), (3, 3),dtype=int) print(is_symetric(S)) print(is_symetric(NS)) ###Output _____no_output_____ ###Markdown RandomREQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. ###Code from sdia_python.lab2.utils import get_random_number_generator rng = get_random_number_generator(None) ###Output _____no_output_____ ###Markdown Consider the Bernoulli(0.4) distribution- Propose at least 2 ways to generate n=1000 samples from it- Compute the empirical mean and variance ###Code N = 1000 p = 0.4 A = rng.binomial(1,p,(N)) result = np.sum(A) result2 = rng.binomial(N,p) B = rng.uniform(size=(N)) < p result3 = np.sum(B) mean = result/N variance = (result*(1-mean)**2 + (N-result)*(mean)**2)/N print("result: ",result,"success / mean: ",mean," / variance: ", variance ) mean2 = result2/N variance2 = (result2*(1-mean2)**2 + (N-result2)*(mean2)**2)/N print("result2: ",result2,"success / mean2: ",mean2," / variance2: ", variance2 ) mean3 = result3/N variance3 = (result3*(1-mean3)**2 + (N-result3)*(mean3)**2)/N print("result3: ",result3,"success3 / mean3: ",mean3," / variance3: ", variance2 ) ###Output _____no_output_____ ###Markdown Consider a random matrix of size $50 \times 100$, filled with i.i.d. 
standard Gaussian variables, compute- the absolute value of each entry- the sum of each row- the sum of each colomn - the (euclidean) norm of each row- the (euclidean) norm of each column ###Code loc = 0 scale = 1 Gaussian = rng.normal(loc,scale,(50,100)) absoluteG = np.abs(Gaussian) rowSumG = np.sum(Gaussian,1) colSumG = np.sum(Gaussian,0) normRowG = np.linalg.norm(Gaussian,axis=1) normColG = np.linalg.norm(Gaussian,axis=0) ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [Student 1]([link](https://github.com/username1))- [Student 2]([link](https://github.com/username2)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Lab 3: Cooling models of the oceanic lithosphereIn this lab, we will calculate the bathymetry of the oceans predicted by two conductive cooling models of the oceanic lithosphere: the half-space model and the plate model. To assess model predictions, we'll use bathymetry data from [ETOPO1](https://doi.org/10.7289/V5C8276M) and age of the oceanic lithosphere data from [Müller et al. (2008)](https://doi.org/10.1029/2007GC001743).Learning objectives:* Expand on the theorical knowledge acquired in [Lecture 3](https://www.leouieda.com/envs398/slides/3-oceanic-lithosphere/).* Apply the principles of isostatic equilibrium to estimate bathymetry from cooling models.* Convert the theoretical knowledge into computations that can be used to model real data. General instructionsThis is a [Jupyter notebook](https://jupyter.org/) running in [Jupyter Lab](https://jupyterlab.readthedocs.io/en/stable/). The notebook is a programming environment that mixes code (the parts with `[1]: ` or similar next to them) and formatted text/images/equations with [Markdown](https://www.markdownguide.org/basic-syntax) (like this part right here).Quick start guide:* **Edit** any cell (blocks of code or text) by double clicking on it.* **Execute** a code or Markdown cell by typing `Shift + Enter` after selecting it.* The current active cell is the one with a **blue bar next to it**.* You can run cells **in any order** as long as the code sequence makes sense (it's best to go top-to-bottom, though).* To copy any file to the current directory, drag and drop it to the file browser on the left side.* Notebook files have the extension `.ipynb`. Import thingsAs before, the first thing to do is load the Python libraries that we'll be using. We'll group all our imports here at the top to make it easier to see what we're using. ###Code # The base of the entire scientific Python stack import numpy as np # Scipy defines a bunch of scientific goodness on top of numpy import scipy.integrate import scipy.special # For making plots and figures import matplotlib.pyplot as plt # To load and operate on data tables import pandas as pd ###Output _____no_output_____ ###Markdown Load the dataThe data that we will try to fit with our cooling models are in a [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) file in the `data` folder. We'll use `pandas` to load it below. The data are bathymetry and age measurements for the South Pacific. 
###Code pacific = pd.read_csv("data/pacific-bathymetry-age.csv") pacific ###Output _____no_output_____ ###Markdown The dataset contains the coordinates of the data points and their associated age (in million years) and bathymetry (in meters). Below, we'll plot the age and bathymetry data using a scatter plot. ###Code plt.figure(figsize=(14, 6)) plt.subplot(1, 2, 1) plt.title("Age") plt.scatter(pacific.longitude, pacific.latitude, c=pacific.age_myr, s=60, cmap="inferno") plt.colorbar(label='age [Myr]') plt.xlabel("longitude") plt.ylabel("latitude") plt.subplot(1, 2, 2) plt.title("Bathymetry") plt.scatter(pacific.longitude, pacific.latitude, c=pacific.bathymetry_m, s=60, cmap="Blues_r") plt.colorbar(label='bathymetry [m]') plt.xlabel("longitude") plt.ylabel("latitude") plt.show() ###Output _____no_output_____ ###Markdown In the grids, we can see a ridge with associated transform faults. There is also a very old section of lithosphere (> 120 Myr) in the deepest part of the grid that is not associated with the current spreading center. ---- YOUR TURNMake a plot of the age (x-axis) versus bathymetry (y-axis) below. ###Code plt.figure(figsize=(7, 6)) # Fill in the lines below with your own code plt.xlabel("age [Myr]") plt.ylabel("bathymetry [m]") plt.show() ###Output _____no_output_____ ###Markdown You plot should look something like this: Questions* What do the outliers in this graph (the shallow points in old lithosphere) most likely represent? ---- Predicting bathymetry from half-space cooling The main idea for estimating bathymetry from cooling is models is that we assume that the oceanic lithosphere is in isostatic equilibrium. As we say in Lecture 2, this means that at a given compensation depth $D$, the pressure from the rock overburden is constant. This translated into vertical columns of material needing to have the same total mass.![](https://raw.githubusercontent.com/leouieda/envs398/master/slides/3-oceanic-lithosphere/cooliing-and-bathymetry.svg) For a given column at $x = x_1$ (or $t = t_1$), the total mass of the column must be the same as the total mass at the ridge. Since the ridge has no lithosphere yet, the total mass at the ridge is the sum of the mass of asthenosphere plus the sum of the water column. The mass at a column at $x_1$ is the sum of the mass of asthenosphere, lithosphere, and water. If the height of the water column is $w_r$, then$$\rho_w w_r + \rho_a (D - w_r) = \rho_w w + \rho_a (D - L - w) + \int\limits_{w}^{w + L}\rho(z) dz $$The density of the lithosphere will depend on its temperature (which we know from the models) and a [coefficient of thermal expansion](https://en.wikipedia.org/wiki/Thermal_expansionCoefficient_of_thermal_expansion) $\alpha_V$:$$\rho(T) = \rho_a \left[1 - \alpha_V (T - T_a)\right]$$Substituting this equation into the isostatic equilibrium condition and the temperature $T$ for the half-space model temperature, we can arrive at:$$ w(t) = w_r + \dfrac{2 \rho_a \alpha_V (T_a - T_0)}{\rho_a - \rho_w} \sqrt{\dfrac{\alpha t}{\pi}} $$There are some other assumptions and approximations that go into this equation. Those interested are refered to "The Solid Earth" section 7.5.2. We can make a *Python function* that calculates the equation above. 
The physical parameters that our function will take as input are:* $w_r$ = bathymetry at the ridge = `ridge_depth` in km* $\rho_w$ and $\rho_a$ = density of water and the asthenosphere (mantle) = `density_water` and `density_mantle` in kg/m³* $\alpha_V$ = coefficient of thermal expansion = `thermal_expansion` in 1/K* $\alpha$ = thermal diffusivity = `diffusivity` in mm²/s* $T_a$ and $T_0$ = temperature of the asthemosphere and surface = `temperature_mantle` and `temperature_surface` in KFinally, the function will also receive the age of the lithosphere $t$ in million years.Below you'll find the code for this function. Notice that we need to take special care with the units. ###Code def bathymetry_halfspace(age, ridge_depth, density_mantle, density_water, temperature_mantle, temperature_surface, thermal_expansion, diffusivity): "Predict bathymetry from the half-space cooling model" bathymetry = ( # Convert from km to m ridge_depth * 1e3 + 2 * density_mantle * thermal_expansion * (temperature_mantle - temperature_surface) / ( density_mantle - density_water # Convert diffusivity from mm²/s to m²/s # Convert the age from Myr to s ) * np.sqrt(diffusivity * 1e-6 * age * 31557600000000 / np.pi) ) # -1 because the equation gives us thickness of the water layer return -1 * bathymetry ###Output _____no_output_____ ###Markdown ---- YOUR TURNComplete the code below to use our new function to predict the half-space model bathymetry for the given age range.The input parameters should be:* $w_r = 2.5\ km$* $\rho_w = 1000\ kg/m^3$ and $\rho_a = 3300\ kg/m^3$* $\alpha_V = 3 \times 10^{-5}\ 1/K$* $\alpha = 1\ mm^2/s$* $T_a = 1600\ K$ and $T_0 = 273\ K$ ###Code ages = np.linspace(0, 140, 100) # Fill in the lines below with your own code predicted_bathymetry_hspace = bathymetry_halfspace( ... ) ###Output _____no_output_____ ###Markdown Now add a **red line** to the plot below with the half-space model predictions (`predicted_bathymetry_hspace`). ###Code plt.figure(figsize=(7, 6)) plt.plot(pacific.age_myr, pacific.bathymetry_m, ".k", markersize=2) # Fill in the lines below with your own code plt.xlabel("age [Myr]") plt.ylabel("bathymetry [m]") plt.show() ###Output _____no_output_____ ###Markdown You plot should look something like this: Questions* How well does the model fit the data?* Is this consistent with what we saw for the heat flow data? **Place your answers here** (double click on the text to edit it). ---- Predicting bathymetry from the plate modelDoing the same procedure for isostatic equilibrium for the plate model yields the following equation (see "Geodynamics" section 4.23):$$w(t) = w_r + \dfrac{\rho_m \alpha_V (T_a - T_0) L}{\rho_a - \rho_w} \left[ \dfrac{1}{2} - \dfrac{4}{\pi^2}\sum\limits_{m=0}^{\infty} \dfrac{1}{(1 + 2m)^2} \exp\left(-\dfrac{t \alpha \pi^2 (1 + 2m)^2}{L^2}\right) \right]$$in which $L$ is the plate thickness. The function that implemets the bathymetry calculation for the plate model will be very similar to the one for the half-space model. The inputs will be same except for the added `thickness` argument represeting the plate thickness $L$. ---- YOUR TURNComplete the function below to calculate the equation shown above. The only part left to add is the summation term. Summation in programming is usually calculated using [the accumulator pattern](http://swcarpentry.github.io/python-novice-gapminder/12-for-loops/index.html).Be aware of the units! We want the output to be in meters. Remember that the arguments of exponentials have to be dimensionless. 
So you'll have to make sure the units of the diffusivity, age, and thickness match. ###Code def bathymetry_plate( age, thickness, ridge_depth, density_mantle, density_water, temperature_mantle, temperature_surface, thermal_expansion, diffusivity ): "Predicted bathymetry for the plate cooling model" multiplier = density_mantle * thermal_expansion * (temperature_mantle - temperature_surface) * thickness * 1e3 / ( density_mantle - density_water ) # Calculate the summation term. We'll truncate the sum at m=99 sum_total = 0 for m in range(0, 100): sum_total = sum_total + ( # Fill in the lines below with your own code ) # The 1e3 converts from km to m bathymetry = ridge_depth * 1e3 + multiplier * 1 / 2 - multiplier * (4 / np.pi**2) * sum_total return -1 * bathymetry ###Output _____no_output_____ ###Markdown To test your code, run the lines below to calculate the bathymetry predictions for the plate model using the same parameters used for the half-space model.The **thickness of the plate should be 150 km** here. ###Code thickness = 150 # Fill in the lines below with your own code predicted_bathymetry_plate = bathymetry_plate( ... ) ###Output _____no_output_____ ###Markdown Add the predictions from the plate model to the data plot. ###Code plt.figure(figsize=(7, 6)) plt.plot(pacific.age_myr, pacific.bathymetry_m, ".k", markersize=1) # Fill in the lines below with your own code plt.xlabel("age [Myr]") plt.ylabel("bathymetry [m]") plt.show() ###Output _____no_output_____ ###Markdown You plot should look something like this: Now we need to determine **which plate thickness $L$ best fits the data**. To do this, repeat the calculation above varying the value of `thickness`.**BONUTS**: To really take advantage of the power of programming, I would suggest using a `for` loop to make a single plot with the model predictions for various values of $L$. This is much better than changing $L$ manually and re-running the code every time. ###Code # Fill in the lines below with your own code ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpyCourse: [SDIA-Python](https://github.com/guilgautier/sdia-python)Date: 10/06/2021Instructor: [Guillaume Gautier](https://guilgautier.github.io/)Students (pair):- [KABIRI Salim](https://github.com/KsalimK)- [Ait Lemqeddem Amine](https://github.com/AmineAitLemqeddem) ###Code %load_ext autoreload %autoreload 2 import numpy as np from lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output <class 'numpy.ndarray'> ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code int_vector1=np.ones(100) int_vector2=np.array([1 for i in range(100)]) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code rand_vect=np.random.randint(10,50,2) rand_vect ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code vect_construction1=np.linspace(0,0.8,5) vect_construction1 vect_construction2=np.arange(0,1,0.2) #stop+1 to achieve the stop desired vect_construction2 ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code float_array=np.array([1,23,np.pi,np.sqrt(5)]) int_array=float_array.astype(int) int_array ###Output _____no_output_____ ###Markdown Given a boolean array- return the indices where - negate the array inplace? 
###Code bool_array=np.array([False,False,True,False,True]) #rand_vect=np.random.randint(0,2,nb point) , rand_vect.astype(bool) Indices_True=np.nonzero(bool_array==True) Indices_True Indices_False=np.nonzero(bool_array==False) ##########Negate the array########## bool_array_negation=(bool_array==False) ## np.invert bool_array_negation ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least- 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size)- 2 ways to compute the outer product matrix $u v^{\top}$- 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u=np.array([1,2,3,4,5]) v=np.array([6,7,8,9,0]) ################inner product########################## inner_product1=np.vdot(u,v) inner_product1 inner_product2=np.dot(u,v.T) inner_product2 ###############outer product#################### outer_product1=np.outer(u,v) outer_product1 v_reshape=v.reshape((1,5)) u_reshape=u.reshape((5,1)) outer_product2=np.dot(u_reshape,v_reshape) outer_product2 #############the outer sum########## ###Method1##### M1=np.zeros((len(u),len(v))) for i in range(len(u)): for j in range(len(v)): M1[i][j]=u[i]+v[j] M1 ####Method2#### M2=np.array([u]*len(u)).T+v M2 M1==M2 ###Output _____no_output_____ ###Markdown Given the following matrix$$M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\\end{pmatrix}$$- Create $M$ using as a list of lists and access the element in the middle- Propose at least 2 ways to create $M$ using numpy and access the element in the middle- Swap its first and second row- Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$- Propose at least 2 ways to extract the diagonal of $M$- Propose at least 2 ways to compute $M^3$- Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$- Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code ######## M Méthode list of list ####### M=[[0,1,2],[3,4,5],[6,7,7]] M[1][1] ######## M Méthode 1 numpy array ####### M1=np.array([[0,1,2],[3,4,5],[6,7,8]]) M1 M1[1][1] ######## M Méthode 2 numpy array ####### M2=np.arange(0,9,1) M2=M2.reshape((3,3)) M2 M2[1][1] ############################################################################ #Par la suite nous allons utiliser M2 comme variable de stockage de la matrice M# ############################################################################ ########### Inverser la première et la deuxième ligne######## M2_copy=np.copy(M2) M2_copy[0,:],M2_copy[1,:]=M2[1,:],M2[0,:] M2_copy ########## Extraction de la sous-matrice ############# ##### Méthode 1 ###### submatrix1=M2[1:,1:] submatrix1 ##### Méthode 2 ###### submatrix2=M2[1:3,1:3] submatrix2 ##### Méthode 3 ###### submatrix3=M2[1:][:,1:] submatrix3 ######## diagonale de M ##### ##### Méthode 1 ###### diag1=np.diag(M2) diag1 ##### Méthode 2 ###### diag2=np.array([M2[i][i] for i in range(len(M))]) diag2 ##### Méthode 3 ###### diag3=M2[1:][:,1:] diag3 ######################## M^3 ########################## ##### Méthode 1 ###### M3_1=np.dot(np.dot(M2,M2),M2) ##### Méthode 2 ###### D,V =np.linalg.eig(M2) matrixD=np.zeros((3,3)) matrixD[0][0]=D[0] matrixD[1][1]=D[1] matrixD[2][2]=D[2] M3_2=np.dot(np.dot(np.linalg.inv(V),matrixD**3),V) ######### #Mathématiquement, cette méthode devrait donner le bon résultat mais j'obtiens un résultat différent (à voir avec le prof) ######## ################## v.T M and MN############ v=np.array([1,2,3]) N=np.array([[1,2,3],[4,5,6],[7,8,9]]) vTM=np.dot(v.T,N) vTM MN=np.dot(M,N) MN ##################### vectorize the matrix m ################# M3_copy=np.copy(M2) vect1=M3_copy.reshape(9) vect1 vect2=M3_copy.T.reshape(9) vect2 ##################### row-wise and column-wise multiplication######## v=np.array([1,2,3]) Mv_row_wise=M2*v.reshape((3,1)) Mv_row_wise Mv_column_wise=M2*v Mv_column_wise ###Output _____no_output_____ ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code def is_symmetric(x): """function that checks whether a given n x n matrix is symmetric Args: x (array): the matrix to test Returns: bool: True if x is symmetric , False if not """ return np.array_equal(x,x.T) x1=np.array([[1,2,3],[4,5,6],[6,7,8]]) is_symmetric(x1) #False x2=np.array([[1,2,3],[2,2,3],[3,3,3]]) is_symmetric(x2) #True ###Output _____no_output_____ ###Markdown RandomREQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. Consider the Bernoulli(0.4) distribution- Propose at least 2 ways to generate n=1000 samples from it- Compute the empirical mean and variance ###Code rng=get_random_number_generator(0) simulation1= rng.binomial(1,0.4,1000) mean1=np.mean(simulation1) var1=np.var(simulation1) simulation1 , mean1 , var1 simulation2= np.array([rng.binomial(1,0.4) for i in range(1000)]) mean2=np.mean(simulation2) var2=np.var(simulation2) simulation2 , mean2 , var2 ###Output _____no_output_____ ###Markdown Consider a random matrix of size $50 \times 100$, filled with i.i.d. 
standard Gaussian variables, compute- the absolute value of each entry- the sum of each row- the sum of each colomn - the (euclidean) norm of each row- the (euclidean) norm of each column ###Code Gaussian_matrix=rng.normal(0,1,(50,100)) Gaussian_matrix Gaussian_matrix_abs=np.abs(Gaussian_matrix) Sum_row = Gaussian_matrix.sum(axis=1) Sum_row Sum_column = Gaussian_matrix.sum(axis=0) Sum_column norm_row = np.linalg.norm(Gaussian_matrix,axis=1) norm_row norm_row = np.linalg.norm(Gaussian_matrix,axis=0) norm_row ###Output _____no_output_____ ###Markdown Practical session 3 - Practice with numpy Course: [SDIA-Python](https://github.com/guilgautier/sdia-python) Date: 10/06/2021 Instructor: [Guillaume Gautier](https://guilgautier.github.io/) Students (pair): - [Hadrien Salem]([link](https://github.com/SnowHawkeye)) - [Emilie Salem]([link](https://github.com/EmilieSalem)) ###Code %load_ext autoreload %autoreload 2 import numpy as np from sdia_python.lab2.utils import get_random_number_generator my_array = np.array([0]) print(type(my_array)) dir(np.ndarray) ###Output _____no_output_____ ###Markdown Propose at leat 2 ways to create an integer vector of size 100 made of 1s ###Code vect1 = np.ones(100) vect2 = np.array([1 for i in range(100)]) ###Output _____no_output_____ ###Markdown Create a vector with values ranging from 10 to 49 ###Code np.array([i for i in range(10,50)]) ###Output _____no_output_____ ###Markdown Propose a way to construct the vector $(0.0, 0.2, 0.4, 0.6, 0.8)$ ###Code float_vect = np.array([0.1*i for i in range(0,10,2)]) ###Output _____no_output_____ ###Markdown Convert a float array into an integer array in place ###Code int_vect = float_vect.astype(int) int_vect ###Output _____no_output_____ ###Markdown Given a boolean array - return the indices where - negate the array inplace? ###Code bool_array = np.array([True, False, True, True]) index_true = np.where(bool_array == True) index_false = np.where(bool_array == False) bool_array_negate = np.invert(bool_array) ###Output _____no_output_____ ###Markdown Given 2 vectors $u, v$, propose at least - 2 ways to compute the inner product $v^{\top} u$ (here they must have the same size) - 2 ways to compute the outer product matrix $u v^{\top}$ - 2 ways to compute the outer sum matrix "$M = u + v^{\top}$", where $M_{ij} = u_i + v_j$ ###Code u = np.array([0,1]) v = np.array([2,3]) inner_prod1 = np.dot(u,v) inner_prod2 = np.inner(u,v) u = np.array([0,1, 2]) v = np.array([2,3]) outer_prod1 = np.tensordot(u, v, axes=0) outer_prod2 = np.outer(u,v) sum_matrix1 = np.add.outer(u,v) sum_matrix2 = "?" # TODO ###Output _____no_output_____ ###Markdown Given the following matrix $$ M = \begin{pmatrix} 0 & 1 & 2 \\ 3 & 4 & 5 \\ 6 & 7 & 8 \\ \end{pmatrix} $$ - Create $M$ using as a list of lists and access the element in the middle - Propose at least 2 ways to create $M$ using numpy and access the element in the middle - Swap its first and second row - Propose at least 3 ways to extract the submatrix $\begin{pmatrix}4 & 5 \\7 & 8 \\\end{pmatrix}$ - Propose at least 2 ways to extract the diagonal of $M$ - Propose at least 2 ways to compute $M^3$ - Compute $v^{\top} M$, resp. $M N$ for a vector, resp. a matrix of your choice. 
- Propose 2 ways to "vectorize" the matrix, i.e., transform it into - $(0, 1, 2, 3, 4, 5, 6, 7, 8)$ - $(0, 3, 6, 1, 4, 7, 2, 5, 8)$ - Consider $v = (1, 2 , 3)$, compute the - row-wise multiplication of $M$ by $v$ ($M_{i\cdot}$ is multiplied by $v_i$) - column-wise multiplication of $M$ by $v$ ($M_{\cdot j}$ is multiplied by $v_i$) ###Code # M as a list M_list = [ [0, 1, 2], [3, 4, 5], [6, 7, 8], ] middle_element_list = M_list[1][1] # M as an array M_numpy1 = np.array(M_list) M_numpy2 = np.array([i for i in range(9)]).reshape((3,3)) M_numpy3 = np.array(np.arange(9)).reshape((3,3)) middle_element_numpy = M_numpy3[1,1] # Swapping rows M_swap = np.copy(M_numpy1) M_swap[[0,1]] = M_swap[[1,0]] M_swap M = np.copy(M_numpy3) # Extracting submatrixes submatrix1 = M[1:,1:] submatrix2 = M[[1,2]].T[[1,2]].T submatrix3 = M[[1,2]][:,1:] # Extracting diagonal diagonal1 = np.diag(M) diagonal2 = np.array([M[i][i] for i in range(len(M))]) # Compute M cube M_cube = M.dot(M).dot(M) M_cube2 = np.linalg.matrix_power(M, 3) # Products v = np.array([0,1,0]) N = np.array(np.arange(0,6)).reshape(3,2) vect_prod = v.dot(M) matrix_prod = M.dot(N) # Vectorize M M_vectorized1_rows = M.reshape(1,9)[0] M_vectorized1_columns = M.T.reshape(1,9)[0] M_vectorized2_rows = M.flatten() M_vectorized2_columns = M.flatten(order = 'F') # Row-wise and column-wise multiplication v = np.array([1,0,2]) row_wise_prod = np.multiply(M.T,v).T column_wise_prod = np.multiply(M,v) row_wise_prod, column_wise_prod ###Output _____no_output_____ ###Markdown Write a function `is_symmetric` that checks whether a given n x n matrix is symmetric, and provide an example ###Code def is_symmetric(M): return np.array_equal(M, M.T) A = np.array(np.arange(9)).reshape((3,3)) B = np.identity(8) print(f"is A symmetric? > {is_symmetric(A)}") print(f"is B symmetric? > {is_symmetric(B)}") ###Output _____no_output_____ ###Markdown Random REQUIREMENT: USE THE FUNCTION `get_random_number_generator` as previously used in Lab 2. Consider the Bernoulli(0.4) distribution - Propose at least 2 ways to generate n=1000 samples from it - Compute the empirical mean and variance ###Code rng = get_random_number_generator(0) results1 = np.array([rng.binomial(1,0.4) for _ in range(1000)]) results2 = rng.binomial(1, 0.4, 1000) mean = np.mean(results1) variance = np.var(results1) mean, variance ###Output _____no_output_____ ###Markdown Consider a random matrix of size $50 \times 100$, filled with i.i.d. standard Gaussian variables, compute - the absolute value of each entry - the sum of each row - the sum of each colomn - the (euclidean) norm of each row - the (euclidean) norm of each column ###Code gaussian_matrix = rng.normal(size = (50, 100)) gaussian_matrix_absolute = abs(gaussian_matrix) gaussian_sum_rows = gaussian_matrix.sum(axis=1) gaussian_sum_columns = gaussian_matrix.sum(axis=0) gaussian_norm_rows = np.linalg.norm(gaussian_matrix, axis=1) gaussian_norm_columns = np.linalg.norm(gaussian_matrix, axis=0) ###Output _____no_output_____
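###Markdown
A quick sanity check on the axis convention used above, assuming the `gaussian_matrix` cell has been run: with a (50, 100) matrix, `axis=1` reduces across columns and yields one value per row, while `axis=0` reduces across rows and yields one value per column.
###Code
# Shapes confirm the axis convention of the reductions above.
assert gaussian_sum_rows.shape == (50,) and gaussian_norm_rows.shape == (50,)
assert gaussian_sum_columns.shape == (100,) and gaussian_norm_columns.shape == (100,)
###Output
_____no_output_____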
Gradiente_Conjugado_(Fletcher_Reeves).ipynb
###Markdown UNIVERSIDADE FEDERAL DO PIAUÍCURSO DE GRADUAÇÃO EM ENGENHARIA ELÉTRICADISCIPLINA: TÉCNICAS DE OTIMIZAÇÃODOCENTE: ALDIR SILVA SOUSADISCENTE: MARIANA DE SOUSA MOURAAtividade 4: Otimização Irrestrita pelo Método do Gradiente Conjugado (Fletcher-Reeves) **Método do Gradiente Conjudado (Fletcher-Reeves) - Multivariável** **Método da Bisseção Monovariável**Trecho responsável por realizar a minimização do valor de lamda, que corresponde a taxa com a qual a otimização da função é feita. ###Code import numpy as np import sympy as sym #Para criar variáveis simbólicas. class Params: def __init__(self,f,vars,eps,a,b): self.f = f self.a = a self.b = b self.vars = vars #variáveis simbólicas self.eps = eps def eval(sym_f,vars,x): map = dict() map[vars[0]] = x return sym_f.subs(map) import pandas as pd import math def bissecao(params): n = math.ceil( -math.log(params.eps/(params.b-params.a),2) ) f = params.f diff = sym.diff(f) #retorna a derivada simbólica de f a = params.a b = params.b for k in range(n): x = (b + a)/2 fx = eval(f,params.vars,x) dfx = eval(diff,params.vars,x) if (dfx == 0): break # Mínimo encontrado. Parar. if (dfx > 0 ): #Passo 2 b = x else: #Passo 3 a = x x = (a+b)/2 return x ###Output _____no_output_____ ###Markdown **Cômputo do gradiente e avaliação das funções nos pontos**Nesta seção, têm-se definições de funções para calcular os valores do gradiente e da hessiana utilizando-se a função em termos de variáveis simbólicas. Também realiza-se aqui, por meio dessas definições de funções, a substituição das variáveis simbólicas por valores numéricos passados como parâmetro. ###Code # Função para o cálculo do gradiente e da Hessiana import sympy as sym #Para criar variáveis simbólicas. def gradiente_simbolico(funcao,variaveis): g = [sym.diff(funcao,x) for x in variaveis] return g # Função para substituição dos valores nas variáveis simbólicas def eval_simbolica(f,variaveis,x): mp = dict() for i in range(len(variaveis)): mp[variaveis[i]] = x[i] return float(f.subs(mp)) # Função para substituição de f(x + lambda*d) def eval_simb(f,variaveis,x): mp = dict() for i in range(len(variaveis)): mp[variaveis[i]] = x[i] return f.subs(mp) def eval_gradiente(grad_simb,variaveis,x): g = [ eval_simbolica(f,variaveis,x) for f in grad_simb] return g import numpy as np import sympy as sym class Parametros: def __init__(self,f,d1f,variaveis,x,eps): self.f = f self.d1f = d1f self.x = x self.eps = eps self.variaveis = variaveis #variáveis simbólicas import pandas as pd import math lmbd = sym.Symbol('lmbd') def gradiente_conjugado(p): f = p.f d1f = p.d1f eps = p.eps x = p.x variaveis = p.variaveis n = 0 cols = ['x','y','df(x)','Tolerância'] table = pd.DataFrame([], columns=cols) y = x d = eval_gradiente(d1f,variaveis,x) d = - np.array(d) k = j = 1 tolerancia = np.linalg.norm(d) while (tolerancia > eps): v = 0 v = x + d*lmbd subP = eval_simb(f,variaveis,v) arg = Params(subP,[lmbd],eps,0,1) lmb = bissecao(arg) G1 = eval_gradiente(d1f,variaveis,y) y = y + d*lmb G2 = eval_gradiente(d1f,variaveis,y) a = pow(np.linalg.norm(G2),2)/pow(np.linalg.norm(G1),2) d = d*a - eval_gradiente(d1f,variaveis,y) print('d ',d) print('y ',y) tolerancia = np.linalg.norm(d) # calcula o valor da tolerância - critério de parada print('tolerância ',tolerancia) print('=====================================================') x = y sP = eval_simbolica(f,variaveis,x) row = pd.DataFrame([[x,y,d,tolerancia]],columns=cols) table = table.append(row, ignore_index=True) # concatena valores de cada iteração return y,sP,table ###Output 
_____no_output_____ ###Markdown **1.** Considere o seguinte problema:Minimizar $\sum_{i=2}^{n} [100(x_i-x^2_{i-1})^2 + (1-x_{i-1})^2]$Resolva para n = 5, 10, e 50. Iniciando do ponto $x_0 = [-1.2,1.0,-1.2,1.0,...]$ ###Code # Para n = 5 import numpy as np import sympy as sym import pandas as pd variaveis = list(sym.symbols("x:5")) print('variaveis ',variaveis) c = variaveis print('c ',c) f1 = 0 for i in range(1,5): f1 = f1 + 100*(c[i] - c[i-1]**2)**2 + (1 - c[i-1])**2 x = [] for i in range(1,6): if (i%2 != 0): x.append(-1.2) else: x.append(1) print('x ',x) eps = 1e-5 grad = gradiente_simbolico(f1,c) d1f = eval_gradiente(grad,c,x) print(grad) print(f1) print(d1f) p = Parametros(f1,grad,c,x,eps) m,fx,table = gradiente_conjugado(p) ###Output variaveis [x0, x1, x2, x3, x4] c [x0, x1, x2, x3, x4] x [-1.2, 1, -1.2, 1, -1.2] [-400*x0*(-x0**2 + x1) + 2*x0 - 2, -200*x0**2 - 400*x1*(-x1**2 + x2) + 202*x1 - 2, -200*x1**2 - 400*x2*(-x2**2 + x3) + 202*x2 - 2, -200*x2**2 - 400*x3*(-x3**2 + x4) + 202*x3 - 2, -200*x3**2 + 200*x4] (1 - x0)**2 + (1 - x1)**2 + (1 - x2)**2 + (1 - x3)**2 + 100*(-x0**2 + x1)**2 + 100*(-x1**2 + x2)**2 + 100*(-x2**2 + x3)**2 + 100*(-x3**2 + x4)**2 [-215.5999999999999, 792.0, -655.5999999999999, 792.0, -440.0] d [346.31407322 116.64653708 147.69085615 -23.0650806 162.25959417] y [-0.91461029 -0.04837036 -0.33218231 -0.04837036 -0.61757202] tolerância 426.86369575921236 ===================================================== d [ 41.38495491 -3.71055021 -41.48628338 45.66385677 -5.72110716] y [ 0.56632414 0.45044238 0.2993845 -0.147003 0.07629473] tolerância 74.60236770190072 ===================================================== d [ -6.81182371 5.15390173 -0.47058492 13.07426706 -12.20090523] y [ 0.69025294 0.439331 0.17515227 -0.01026088 0.05916266] tolerância 19.823815047497135 ===================================================== d [ 0.49660893 -5.07995285 8.5555359 7.24591118 -8.81493526] y [0.66782784 0.45629808 0.17360307 0.0327807 0.01899626] tolerância 15.147806719916318 ===================================================== d [-0.41363978 1.24166023 3.66353533 -0.18527716 0.40394085] y [ 0.66908005 0.4434889 0.19517598 0.05105138 -0.00323073] tolerância 3.9155848765365726 ===================================================== d [1.77793721 3.23421805 2.49774528 0.89486472 1.04802939] y [ 0.66685993 0.45015324 0.2148392 0.05005694 -0.00106267] tolerância 4.6646658058422945 ===================================================== d [28.75285906 32.06114988 31.18513036 13.95513805 7.94755255] y [0.71407146 0.53603501 0.28116459 0.07381928 0.02676681] tolerância 55.54337099384056 ===================================================== d [26.98892414 25.89260546 31.55764931 20.25127347 0.50199211] y [0.78437856 0.61443161 0.35741913 0.10794267 0.04620034] tolerância 52.96300922864571 ===================================================== d [28.82607215 35.43814815 44.59870109 35.91200128 -4.97269888] y [0.8223688 0.65087864 0.40184041 0.13644884 0.04690695] tolerância 73.41831167385926 ===================================================== d [31.35153825 57.45421543 71.15705134 64.23258323 -9.43724695] y [0.85788676 0.69454366 0.45679257 0.18069771 0.04077986] tolerância 116.45650756688255 ===================================================== d [14.7484763 51.02491921 65.7595499 64.56027096 -5.78778356] y [0.87738101 0.73026844 0.50103772 0.22063725 0.03491182] tolerância 106.5218539138573 ===================================================== d [ 1.34386061 34.53268759 52.04522123 
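###Markdown As a sanity check on the n = 5 result (a sketch only, assuming SciPy is available; SciPy is not used elsewhere in this exercise), the same objective can be handed to an off-the-shelf nonlinear conjugate-gradient solver. The `rosenbrock` helper and the variable `n` below are illustrative, and the same check can be repeated for n = 10 and n = 50. ###Code
# Cross-check of the minimizer with SciPy's CG method (sketch, not part of the original solution)
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v):
    # Same objective as f1: sum of 100*(v[i] - v[i-1]**2)**2 + (1 - v[i-1])**2
    return sum(100.0*(v[i] - v[i-1]**2)**2 + (1.0 - v[i-1])**2 for i in range(1, len(v)))

n = 5  # also try 10 and 50
x0 = np.array([-1.2 if i % 2 == 0 else 1.0 for i in range(n)])  # [-1.2, 1, -1.2, 1, ...]
res = minimize(rosenbrock, x0, method='CG', tol=1e-10)
print(res.x)    # expected to approach [1, 1, ..., 1]
print(res.fun)  # expected to approach 0
###Output _____no_output_____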
[... the printed log ends at y ≈ (0.99999996, 0.99999992, 0.99999984, 0.99999968, 0.99999935) with tolerance ≈ 9.6e-06 ...]
===================================================== ###Markdown
Note that the method shows large swings in the gradient value, entering and leaving the feasible region until it converges. For tolerance values of order greater than $10^{-3}$, the derivative strayed far from the solution and the method did not converge.
###Code
# Results returned by gradiente_conjugado above; in a notebook cell only the
# last expression (table) is displayed.
m
fx
table
###Output
_____no_output_____
###Markdown
Even though it wanders around the search region, the code still reaches convergence with good accuracy in the solution for n = 5. This was only possible when the tolerance used was sufficiently small, $10^{-5}$.
###Code
# For n = 10
import numpy as np
import sympy as sym
import pandas as pd

# Ten symbolic variables x0..x9
variaveis = list(sym.symbols("x:10"))
c = variaveis

# 10-variable Rosenbrock function: sum_{i=1}^{9} 100*(x_i - x_{i-1}^2)^2 + (1 - x_{i-1})^2
f1 = 0
for i in range(1, 10):
    f1 = f1 + 100*(c[i] - c[i-1]**2)**2 + (1 - c[i-1])**2

# Classic starting point (-1.2, 1, -1.2, 1, ...)
x = []
for i in range(1, 11):
    if (i % 2 != 0):
        x.append(-1.2)
    else:
        x.append(1)
print('x ', x)

eps = 1e-8
# Helper routines defined earlier in the notebook (see the sketch below)
grad = gradiente_simbolico(f1, c)
d1f = eval_gradiente(grad, c, x)
p = Parametros(f1, grad, c, x, eps)
m, fx, table = gradiente_conjugado(p)
###Output
Streaming output was truncated to the last 5000 lines.
[... per-iteration log of the n = 10 run omitted: each block prints the search direction d, the current iterate y and the tolerance; in the surviving excerpt the tolerance is still of order 10^2–10^3 and the log is cut off mid-iteration ...]
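###Markdown
The helpers `gradiente_simbolico` and `eval_gradiente` called in the cell above are defined earlier in the notebook and are not reproduced here. The sketch below is a rough, hypothetical illustration of how a symbolic gradient can be built and evaluated with sympy — an assumption for clarity, not the notebook's actual implementation (the `_sketch` names are made up to avoid clashing with the real helpers).
###Code
# Minimal sketch (assumed, NOT the notebook's own helpers): build the symbolic
# gradient of an expression with sympy and evaluate it at a numeric point.
import sympy as sym

def gradiente_simbolico_sketch(f, vars_):
    # List of partial derivatives df/dx_i, one per variable
    return [sym.diff(f, v) for v in vars_]

def eval_gradiente_sketch(grad, vars_, ponto):
    # Substitute the numeric point into each partial derivative
    subs = dict(zip(vars_, ponto))
    return [float(g.subs(subs)) for g in grad]

# Example usage with f1, c and x from the cell above:
# grad_sketch = gradiente_simbolico_sketch(f1, c)
# print(eval_gradiente_sketch(grad_sketch, c, x))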
0.99059228 0.97321512 0.9587956 0.9200545 0.84521615] tolerância 366.71096481933085 ===================================================== d [ -8.32665227 13.62483232 -70.96127826 18.4589165 24.15051951 53.28299656 1.15723396 -224.23258919 -71.33836726 -251.66329224] y [1.00211205 0.99508686 0.99523751 0.9977315 1.00083113 0.99071097 0.9732044 0.95830946 0.91990032 0.84466603] tolerância 357.4334870508552 ===================================================== d [ -11.76244728 17.03429979 -67.14015758 18.61961004 18.10133905 50.78691764 7.05244149 -219.83479582 -70.53278941 -245.0515503 ] y [1.00209428 0.99511594 0.99508604 0.9977709 1.00088268 0.99082471 0.97320687 0.95783081 0.91974804 0.84412883] tolerância 348.700722289309 ===================================================== d [ -15.09147 20.27536528 -63.31836157 18.71637426 12.24340738 48.32488744 12.65520652 -215.41076039 -70.06760048 -238.8348619 ] y [1.00206917 0.99515231 0.99494272 0.99781064 1.00092132 0.99093312 0.97322192 0.95736155 0.91959748 0.84360575] tolerância 340.5834201673736 ===================================================== d [ -18.31689093 23.35488344 -59.49850902 18.75074102 6.57055445 45.90012328 17.97651054 -210.976448 -69.93370048 -233.01630777] y [1.00203684 0.99519574 0.99480709 0.99785073 1.00094755 0.99103663 0.97324903 0.95690013 0.91944739 0.84309415] tolerância 333.0828037239612 ===================================================== d [ -21.44560541 26.2841878 -55.69152531 18.72734058 1.07609638 43.52205136 23.03156662 -206.57899703 -70.13248571 -227.63309528] y [1.00199761 0.99524576 0.99467964 0.9978909 1.00096162 0.99113495 0.97328754 0.95644821 0.91929759 0.84259502] tolerância 326.25003513668423 ===================================================== d [ -24.48432018 29.07353104 -51.89933481 18.64868779 -4.25049437 41.19335306 27.83609892 -202.23918614 -70.65884871 -222.69444811] y [1.00195167 0.99530207 0.99456035 0.99793101 1.00096393 0.99122818 0.97333687 0.95600571 0.91914737 0.84210742] tolerância 320.09694254135474 ===================================================== d [ -27.43888409 31.73145869 -48.11834396 18.51563935 -9.42091913 38.91267762 32.40437273 -197.96108999 -71.50522869 -218.19412048] y [1.00189904 0.99536456 0.99444879 0.9979711 1.00095479 0.99131672 0.9733967 0.955571 0.91899548 0.84162874] tolerância 314.6128421014767 ===================================================== d [ -30.31731034 34.26870257 -44.34758468 18.33019446 -14.44794767 36.68086277 36.75278773 -193.7614934 -72.67047106 -214.14167155] y [1.00183986 0.995433 0.994345 0.99801104 1.00093447 0.99140066 0.9734666 0.95514401 0.91884125 0.84115811] tolerância 309.8101887448536 ===================================================== d [ -33.13071522 36.69935782 -40.58811046 18.095811 -19.34710061 34.50031696 40.90196341 -189.66888163 -74.15800806 -210.55911851] y [1.00177469 0.99550666 0.99424968 0.99805044 1.00090341 0.9914795 0.9735456 0.95472752 0.91868505 0.84069781] tolerância 305.72165482547484 ===================================================== d [ -35.88224621 39.02763428 -36.82717697 17.81015501 -24.13033233 32.36288271 44.86130552 -185.65779761 -75.9568917 -207.41488802] y [1.00170298 0.99558609 0.99416183 0.9980896 1.00086154 0.99155417 0.97363413 0.954317 0.91852454 0.84024208] tolerância 302.3003527054483 ===================================================== d [ -38.58277174 41.26640917 -33.06202068 17.47538885 -28.81396781 30.26849857 48.64982437 -181.74769033 -78.07308553 -204.72681168] y [1.00162532 
0.99567056 0.99408212 0.99812815 1.00080931 0.99162422 0.97373122 0.95391517 0.91836014 0.83979315] tolerância 299.5729667902676 ===================================================== d [ -41.23883838 43.42344959 -29.28359605 17.09095828 -33.41201897 28.21199565 52.28063618 -177.93141647 -80.50442817 -202.48556682] y [1.00154181 0.99575988 0.99401056 0.99816597 1.00074695 0.99168973 0.97383652 0.95352179 0.91819116 0.83935004] tolerância 297.5270894235365 ===================================================== d [ -43.85536266 45.50445274 -25.48148113 16.65533592 -37.93692349 26.18723359 55.76384441 -174.19599288 -83.24897908 -200.67863132] y [1.00145225 0.99585419 0.99394696 0.99820309 1.00067438 0.991751 0.97395007 0.95313535 0.91801632 0.83891028] tolerância 296.1448824122744 ===================================================== d [ -46.43826909 47.51616021 -21.64634813 16.16751851 -42.40172684 24.18893242 59.11056844 -170.53398211 -86.30772601 -199.30061121] y [1.001357 0.99595302 0.99389162 0.99823927 1.00059199 0.99180788 0.97407118 0.95275703 0.91783551 0.83847443] tolerância 295.41990082135925 ===================================================== d [ -48.99067298 49.46213437 -17.76729266 15.62521206 -46.81673866 22.21046671 62.32748232 -166.9287953 -89.67973214 -198.33880352] y [1.0012558 0.99605657 0.99384444 0.9982745 1.00049958 0.99186059 0.9742 0.95238538 0.91764742 0.8380401 ] tolerância 295.33404418556484 ===================================================== d [ -51.51522318 51.34553342 -13.83400305 15.02619438 -51.19149401 20.2454115 65.42045911 -163.3644273 -93.36416289 -197.78161199] y [1.00114903 0.99616436 0.99380572 0.99830855 1.00039756 0.991909 0.97433583 0.9520216 0.91745198 0.83760786] tolerância 295.8715928477274 ===================================================== d [ -54.01196417 53.16687893 -9.83603614 14.36753951 -55.53259962 18.28677349 68.391589 -159.81906808 -97.35752703 -197.61191517] y [1.00103677 0.99627626 0.99377558 0.9983413 1.00028599 0.99195312 0.9744784 0.95166558 0.91724852 0.83717684] tolerância 297.0083497933226 ===================================================== d [ -56.48015317 54.9258648 -5.76366405 13.64615045 -59.84534037 16.32808866 71.24127603 -156.27254745 -101.65810349 -197.81672695] y [1.00091866 0.99639252 0.99375407 0.99837272 1.00016456 0.99199311 0.97462795 0.95131609 0.91703562 0.83674471] tolerância 298.72499523115783 ===================================================== d [ -58.91750328 56.62076789 -1.60792073 12.8587531 -64.13297284 14.36306036 73.96764955 -152.70359839 -106.2631205 -198.38242628] y [1.00079473 0.99651304 0.99374142 0.99840266 1.00003325 0.99202893 0.97478427 0.9509732 0.91681256 0.83631066] tolerância 301.0008175639934 ===================================================== d [ -61.31608675 58.24477777 2.63885476 12.00145493 -68.3921919 12.38457602 76.56205266 -149.0789211 -111.15870878 -199.27762473] y [1.00066589 0.99663686 0.9937379 0.99843078 0.999893 0.99206034 0.97494602 0.95063928 0.91658019 0.83587685] tolerância 303.78954494727657 ===================================================== d [ -63.67103951 59.79412189 6.98273358 11.07123788 -72.6223255 10.38749492 79.01901069 -145.37908769 -116.34011178 -200.49045776] y [1.00053135 0.99676466 0.99374369 0.99845711 0.99974294 0.99208752 0.97511401 0.95031217 0.91633629 0.83543959] tolerância 307.07181541776373 ===================================================== d [ -65.97118927 61.25940477 11.42774308 10.06451047 -76.81511236 8.36621831 81.32538645 
-141.57306958 -121.79076033 -201.99138206] y [1.00039164 0.99689586 0.99375902 0.9984814 0.99958359 0.99211031 0.97528739 0.94999318 0.91608101 0.83499968] tolerância 310.80156934341016 ===================================================== d [ -68.20433749 62.63068655 15.97550415 8.97826086 -80.9603356 6.31620801 83.4668758 -137.63298548 -127.49254245 -203.75270145] y [1.00024689 0.99703027 0.99378409 0.99850349 0.99941504 0.99212866 0.97546584 0.94968254 0.91581378 0.83455647] tolerância 314.934597423396 ===================================================== d [ -70.35938741 63.89939707 20.625935 7.81020654 -85.04842636 4.23414634 85.43045714 -133.53852623 -133.43057541 -205.75604929] y [1.00009673 0.99716816 0.99381926 0.99852325 0.9992368 0.99214257 0.9756496 0.94937952 0.91553309 0.83410788] tolerância 319.4391005443264 ===================================================== d [ -72.41773014 65.05061223 25.37473735 6.55807279 -89.06015371 2.11705981 87.19444666 -129.25860295 -139.574267 -207.96241312] y [0.99994182 0.99730885 0.99386467 0.99854045 0.99904955 0.99215189 0.97583769 0.94908552 0.91523932 0.83365488] tolerância 324.2514264704488 ===================================================== d [-7.43610579e+01 6.60702949e+01 3.02147176e+01 5.22073952e+00 -9.29756920e+01 -3.65939745e-02 8.87380106e+01 -1.24768652e+02 -1.45892367e+02 -2.10337259e+02] y [0.99978238 0.99745206 0.99392054 0.99855489 0.99885347 0.99215655 0.97602966 0.94880093 0.91493203 0.83319702] tolerância 329.3137002351204 ===================================================== d [ -76.16936632 66.94350101 35.13517933 3.79807226 -96.77221274 -2.22710169 90.03883749 -120.04649826 -152.34867873 -212.8437865 ] y [0.99961867 0.99759753 0.99398706 0.99856638 0.99864877 0.99215647 0.97622503 0.94852624 0.91461082 0.83273393] tolerância 334.56342205127385 ===================================================== d [ -77.82124097 67.65469253 40.12176057 2.29108448 -100.42414686 -4.45340484 91.0736172 -115.07298847 -158.90198247 -215.44305238] y [0.99945097 0.99774491 0.99406442 0.99857474 0.99843571 0.99215157 0.97642326 0.94826194 0.91427541 0.83226532] tolerância 339.9336627485657 ===================================================== d [ -79.2942147 68.1880977 45.15634909 0.7020867 -103.90354365 -6.7130318 91.81859413 -109.83263137 -165.50611201 -218.09419361] y [0.99927963 0.99789387 0.99415275 0.99857979 0.99821461 0.99214176 0.97662378 0.94800859 0.91392556 0.83179099] tolerância 345.35344310306436 ===================================================== d [ -80.56519504 68.52811517 50.21708965 -0.96518497 -107.18053145 -9.00205688 92.25018743 -104.31421806 -172.11019733 -220.75475837] y [0.99910506 0.99804399 0.99425217 0.99858133 0.99798586 0.99212699 0.97682593 0.94776678 0.91356117 0.83131083] tolerância 350.7483154419247 ===================================================== d [ -81.61095724 68.65975451 55.27849814 -2.70546842 -110.22387921 -11.31509205 92.34566776 -98.51140289 -178.65909485 -223.38115131] y [0.99892768 0.99819487 0.99436273 0.99857921 0.99774988 0.99210717 0.97702903 0.94753711 0.91318225 0.8308248 ] tolerância 356.04115884395657 ===================================================== d [ -82.40869471 68.56910297 60.31169374 -4.51191116 -113.00165425 -13.6453165 92.0838748 -92.42321703 -185.09401421 -225.92919356] y [0.998748 0.99834603 0.99448443 0.99857325 0.99750721 0.99208225 0.97723234 0.94732022 0.9127889 0.830333 ] tolerância 361.15319015737566 ===================================================== d [ 
-82.93661414 68.24380501 65.28475703 -6.37604937 -115.48196526 -15.98454852 91.44595489 -86.05448648 -191.3533443 -228.3547923 ] y [0.99856657 0.998497 0.99461722 0.99856332 0.99725842 0.99205221 0.97743508 0.94711674 0.91238139 0.82983558] tolerância 366.0051821147172 ===================================================== d [ -83.17455815 67.67353839 70.16321678 -8.28783651 -117.6337738 -18.3233625 90.41609373 -79.41612607 -197.37367143 -230.61470766] y [0.99838397 0.99864724 0.99476095 0.99854928 0.99700417 0.99201702 0.97763641 0.94692728 0.9119601 0.82933282] tolerância 370.51886746116884 ===================================================== d [ -83.10463433 66.85046808 74.91066172 -10.23572648 -119.42775129 -20.65125261 88.98221547 -72.5252827 -203.09097131 -232.66739802] y [0.99820085 0.99879624 0.99491543 0.99853103 0.99674518 0.99197668 0.97783548 0.94675243 0.91152555 0.82882509] tolerância 374.6184948848005 ===================================================== d [ -82.70929305 65.76758568 79.48694356 -12.2063595 -120.83331192 -22.95610117 87.13407903 -65.40266904 -208.43453703 -234.46468128] y [0.9980185 0.99894292 0.9950798 0.99850858 0.99648313 0.99193137 0.97803072 0.9465933 0.91107993 0.82831457] tolerância 378.21890960825925 ===================================================== d [ -81.98148102 64.42688404 83.8582868 -14.18633521 -121.83381647 -25.22710879 84.87358211 -58.08207724 -213.35538775 -235.98584961] y [0.9978364 0.99908772 0.9952548 0.9984817 0.9962171 0.99188082 0.97822256 0.94644931 0.91062103 0.82779836] tolerância 381.2768445327544 ===================================================== d [ -80.91159546 62.82860635 87.9854106 -16.1604328 -122.40738243 -27.45145784 82.20114886 -50.59667951 -217.79124237 -237.19438028] y [0.99765591 0.99922956 0.99543942 0.99845047 0.99594887 0.99182528 0.97840942 0.94632143 0.9101513 0.82727881] tolerância 383.72531070373384 ===================================================== d [ -79.495243 60.97734126 91.83085017 -18.11272801 -122.53803612 -29.61628927 79.12392677 -42.9851771 -221.68566035 -238.05913721] y [0.99747837 0.99936742 0.99563248 0.99841501 0.99568028 0.99176505 0.97858979 0.94621041 0.90967342 0.82675836] tolerância 385.507985269419 ===================================================== d [ -77.73485579 58.88320703 95.36204572 -20.02754245 -122.21888527 -31.70988873 75.65685741 -35.29237657 -224.9957594 -238.56357255] y [0.99730395 0.99950122 0.99583398 0.99837527 0.99541141 0.99170006 0.9787634 0.94611609 0.909187 0.82623601] tolerância 386.59166683473154 ===================================================== d [ -75.63650441 56.55911976 98.54860484 -21.88909775 -121.4473898 -33.72085829 71.81929404 -27.56546668 -227.68364482 -238.69462025] y [0.99713338 0.99963042 0.99604322 0.99833132 0.99514323 0.99163049 0.9789294 0.94603865 0.90869332 0.82571255] tolerância 386.94982816880315 ===================================================== d [ -73.21052253 54.02123114 101.36384294 -23.68203357 -120.22677343 -35.63875521 67.63529239 -19.85399841 -229.72010763 -238.44709099] y [0.99696686 0.99975494 0.99626019 0.99828313 0.99487585 0.99155625 0.97908753 0.94597796 0.90819204 0.82518703] tolerância 386.5684886247622 ===================================================== d [ -70.47433906 51.29040771 103.78887943 -25.39238753 -118.56994675 -37.45511087 63.13639081 -12.20773275 -231.09031331 -237.82480127] y [0.99680676 0.99987307 0.99648185 0.99823134 0.99461295 0.99147831 0.97923543 0.94593455 0.9076897 0.82466561] tolerância 
385.4537098275053 ===================================================== d [ -67.44431517 48.38670441 105.80343635 -27.00590392 -116.48807441 -39.1609875 58.35303888 -4.67607222 -231.7764907 -236.82844545] y [0.99665213 0.99998561 0.99670958 0.99817563 0.99435278 0.99139613 0.97937396 0.94590776 0.90718264 0.82414377] tolerância 383.6036316785285 ===================================================== d [ -64.14489646 45.33548652 107.39889614 -28.51115961 -114.00498253 -40.75116259 53.32312545 2.69245882 -231.78395405 -235.47794922] y [0.99650414 1.00009178 0.99694173 0.99811637 0.99409718 0.9913102 0.979502 0.9458975 0.90667408 0.82362413] tolerância 381.050321413097 ===================================================== d [ -60.6022675 42.16272671 108.57067952 -29.89802891 -111.14742638 -42.22166609 48.08566303 9.85241835 -231.12400325 -233.79556237] y [0.9963634 1.00019126 0.99717739 0.99805381 0.99384703 0.99122079 0.979619 0.94590341 0.9061655 0.82310744] tolerância 377.8309217225496 ===================================================== d [ -56.847195 38.89672947 109.32393071 -31.15937466 -107.95032856 -43.57184771 42.68285598 16.76297396 -229.82507531 -231.81598592] y [0.99623087 1.00028346 0.9974148 0.99798843 0.99360398 0.99112846 0.97972415 0.94592495 0.90566009 0.82259619] tolerância 374.0053569265116 ===================================================== d [ -52.9074242 35.56302995 109.66145834 -32.28794139 -104.44404589 -44.80016837 37.15346826 23.38686202 -227.9082117 -229.56390933] y [0.99610656 1.00036852 0.99765387 0.99792029 0.99336792 0.99103318 0.97981749 0.94596161 0.90515752 0.82208927] tolerância 369.61703423296274 ===================================================== d [ -48.81146565 32.1870766 109.59028474 -33.27812931 -100.66123733 -45.90672757 31.53589386 29.69061488 -225.40125721 -227.06864491] y [0.99599047 1.00044655 0.99789449 0.99784945 0.99313875 0.99093488 0.97989901 0.94601293 0.90465744 0.82158556] tolerância 364.71644504275275 ===================================================== d [ -44.595651 28.79825541 109.13823909 -34.13121188 -96.65103873 -46.89970259 25.87251227 35.65112662 -222.37054843 -224.39268494] y [0.99588374 1.00051693 0.99813413 0.99777668 0.99291863 0.99083449 0.97996797 0.94607785 0.90416455 0.82108902] tolerância 359.41110087948994 ===================================================== d [ -40.2874151 25.41920919 108.32009427 -34.84538778 -92.44728589 -47.78221236 20.19760865 41.24704017 -218.85299532 -221.56728917] y [0.99578622 1.00057991 0.99837279 0.99770204 0.99270728 0.99073193 0.98002455 0.94615581 0.90367828 0.82059833] tolerância 353.7573395406203 ===================================================== d [ -35.91571675 22.07264863 107.15986512 -35.42245977 -88.08925798 -48.56135883 14.54446786 46.46434501 -214.90172085 -218.6377047 ] y [0.99569812 1.00063549 0.99860966 0.99762584 0.99250512 0.99062745 0.98006871 0.94624601 0.90319971 0.82011382] tolerância 347.8343330705852 ===================================================== d [ -31.50537604 18.77793032 105.67746395 -35.86367007 -83.61003065 -49.24283254 8.94190411 51.29159097 -210.5598046 -215.63829172] y [0.99561931 1.00068392 0.99884479 0.99754812 0.99231184 0.99052089 0.98010063 0.94634796 0.90272817 0.81963408] tolerância 341.7026653506677 ===================================================== d [ -27.08561606 15.55567526 103.91500506 -36.17858463 -79.05709728 -49.84225702 3.41745677 55.73297746 -205.91091166 -212.64105035] y [0.99555042 1.00072499 0.99907588 0.9974697 0.992129 
0.99041321 0.98012018 0.94646012 0.90226773 0.81916254] tolerância 335.4854195516098 ===================================================== d [ -22.6756656 12.4196852 101.89368821 -36.37068315 -74.45796869 -50.36645791 -2.00840758 59.78774649 -200.99555938 -209.6745756 ] y [0.99549119 1.000759 0.99930311 0.99739058 0.99195612 0.99030422 0.98012765 0.946582 0.90181745 0.81869754] tolerância 329.2348777700029 ===================================================== d [ -18.29425607 9.3826378 99.64230588 -36.44682442 -69.84338716 -50.8262474 -7.31850316 63.46290549 -195.86780924 -206.78107795] y [0.9954416 1.00078616 0.99952593 0.99731105 0.9917933 0.99019408 0.98012326 0.94671274 0.90137793 0.81823904] tolerância 323.0244475863857 ===================================================== d [ -13.95694828 6.45455822 97.18895477 -36.41446181 -65.24073204 -51.23273901 -12.49990502 66.76924974 -190.57902069 -204.0003608 ] y [0.9954016 1.00080668 0.99974382 0.99723135 0.99164057 0.99008294 0.98010726 0.94685151 0.90094961 0.81778686] tolerância 316.9230206532159 ===================================================== d [ -9.67611046 3.64293161 94.560425 -36.28137627 -60.67380091 -51.59711843 -17.54377657 69.72062784 -185.1770094 -201.3694133 ] y [0.99537108 1.00082079 0.99995635 0.99715172 0.99149791 0.9899709 0.98007992 0.94699752 0.90053287 0.81734077] tolerância 310.99420978298895 ===================================================== d [ -5.46039484 0.95290662 91.77570641 -36.05306823 -56.15921116 -51.92746313 -22.4439552 72.32832214 -179.6944306 -198.91129881] y [0.99534985 1.00082879 1.00016383 0.99707211 0.99136478 0.98985769 0.98004143 0.9471505 0.90012655 0.81689892] tolerância 305.2778355442895 ===================================================== d [ -1.31732975 -1.6123457 88.87052662 -35.74230926 -51.72137408 -52.2413223 -27.20141595 74.62017881 -174.19489132 -196.68237456] y [0.99533791 1.00083087 1.00036452 0.99699327 0.99124197 0.98974414 0.97999235 0.94730867 0.89973361 0.81646395] tolerância 299.8666023766127 ===================================================== d [ 2.75032381 -4.05162687 85.85281611 -35.35158518 -47.36600422 -52.54233917 -31.81469935 76.60473588 -168.68908462 -194.68524473] y [0.99533502 1.00082733 1.00055952 0.99691485 0.99112849 0.98962951 0.97993267 0.9474724 0.89935139 0.81603239] tolerância 294.766804055478 ===================================================== d [ 6.74139107 -6.36592919 82.7514155 -34.89239279 -43.10894754 -52.84657051 -36.29147082 78.31082219 -163.22746179 -192.96716605] y [0.99534103 1.00081847 1.00074726 0.99683754 0.99102491 0.98951461 0.97986309 0.94763991 0.89898251 0.81560667] tolerância 290.05443593847207 ===================================================== d [ 10.65666897 -8.5563093 79.57042218 -34.36653801 -38.95123969 -53.15687254 -40.6340877 79.74709088 -157.81393133 -191.52693946] y [0.99535582 1.0008045 1.00092883 0.99676098 0.99093032 0.98939866 0.97978346 0.94781174 0.89862436 0.81518326] tolerância 285.7283172196595 ===================================================== d [ 14.49936387 -10.62589364 76.32721164 -33.78180492 -34.89969383 -53.48491581 -44.85264635 80.93620562 -152.4784657 -190.39504593] y [0.9953792 1.00078573 1.00110343 0.99668557 0.99084485 0.98928202 0.9796943 0.94798672 0.89827808 0.81476301] tolerância 281.8354216199376 ===================================================== d [ 18.27334546 -12.57752968 73.02730013 -33.14126424 -30.95438259 -53.83603776 -48.95433958 81.89048348 -147.22922454 -189.5796864 ] y 
[0.99541113 1.00076234 1.00127147 0.9966112 0.99076802 0.98916426 0.97959556 0.94816492 0.89794238 0.81434383] tolerância 278.38716355473554 ===================================================== d [ 21.98561091 -14.41650278 69.68393364 -32.45188003 -27.11742392 -54.2213633 -52.95327429 82.63242005 -142.089401 -189.10866726] y [0.99545122 1.00073474 1.00143171 0.99653848 0.9907001 0.98904614 0.97948814 0.9483446 0.89761933 0.81392785] tolerância 275.4253813569094 ===================================================== d [ 25.64097635 -16.14566275 66.29415741 -31.71348907 -23.3838965 -54.64198575 -56.85530361 83.16691568 -137.0522654 -188.97630559] y [0.99549963 1.000703 1.00158513 0.99646703 0.99064039 0.98892676 0.97937156 0.94852653 0.8973065 0.8135115 ] tolerância 272.93952618236756 ===================================================== d [ 29.24711025 -17.76975707 62.86232122 -30.92949973 -19.75131632 -55.10484456 -60.67265585 83.5078834 -132.1262155 -189.19770826] y [0.99555608 1.00066745 1.00173108 0.99639721 0.99058891 0.98880646 0.97924638 0.94870963 0.89700476 0.81309544] tolerância 270.9501225115445 ===================================================== d [ 32.81079064 -19.29262619 59.38703456 -30.10080716 -16.21474902 -55.61319324 -64.41451769 83.66272854 -127.30912504 -189.77612773] y [0.99562047 1.00062833 1.00186948 0.99632911 0.99054542 0.98868514 0.9791128 0.94889349 0.89671386 0.8126789 ] tolerância 269.4599254641884 ===================================================== d [ 36.33845334 -20.71763104 55.86498311 -29.2274478 -12.76866109 -56.16917848 -68.08903458 83.6362251 -122.59591778 -190.71218939] y [0.99569271 1.00058585 1.00200023 0.99626284 0.99050972 0.9885627 0.97897098 0.94907768 0.89643357 0.81226108] tolerância 268.4676736666609 ===================================================== d [ 39.83566763 -22.04711179 52.29077241 -28.30837812 -9.40743583 -56.77374876 -71.70253484 83.42952931 -117.97875876 -192.00430979] y [0.99577298 1.00054009 1.00212364 0.99619828 0.99048152 0.98843861 0.97882057 0.94926244 0.89616274 0.81183978] tolerância 267.9678814025093 ===================================================== d [ 43.30817533 -23.2835597 48.65941862 -27.34281021 -6.12554716 -57.42809149 -75.26178879 83.04410188 -113.4508143 -193.6523468 ] y [0.99586098 1.00049138 1.00223916 0.99613574 0.99046074 0.9883132 0.97866217 0.94944675 0.89590212 0.81141562] tolerância 267.9581626082016 ===================================================== d [ 46.7596195 -24.42810991 44.96348506 -26.32858442 -2.91741113 -58.13078845 -78.76995334 82.47671529 -109.00043292 -195.64861516] y [0.99595666 1.00043995 1.00234665 0.99607534 0.9904472 0.98818633 0.97849591 0.9496302 0.89565149 0.81098782] tolerância 268.4257679071319 ===================================================== d [ 50.19207088 -25.48094259 41.19489991 -25.2630214 0.22194515 -58.87913837 -82.22789395 81.72163615 -104.61469953 -197.98237916] y [0.99605995 1.00038598 1.00244598 0.99601718 0.99044076 0.98805791 0.9783219 0.9498124 0.8954107 0.81055562] tolerância 269.35418098407894 ===================================================== d [ 53.60666807 -26.44142252 37.34591352 -24.14337823 3.2963917 -59.67041537 -85.63538197 80.7717768 -100.28215005 -200.64464508] y [0.99617121 1.0003295 1.00253729 0.99596118 0.99044125 0.98792741 0.97813964 0.94999354 0.89517882 0.81011678] tolerância 270.72892367050895 ===================================================== d [ 57.0016941 -27.30773889 33.40813047 -22.96626832 6.30909197 -60.49921795 
-88.98814979 79.61688928 -95.98835528 -203.61791807] y [0.99629003 1.00027089 1.00262007 0.99590766 0.99044856 0.98779514 0.97794982 0.95017257 0.89495654 0.80967204] tolerância 272.52555692957066 ===================================================== d [ 60.37311701 -28.07705812 29.373672 -21.72819528 9.26191603 -61.35871038 -92.27901643 78.24487207 -91.71884274 -206.88087863] y [0.99641637 1.00021036 1.00269412 0.99585676 0.99046254 0.98766104 0.97775257 0.95034905 0.89474378 0.80922071] tolerância 274.7154107189343 ===================================================== d [ 63.71441619 -28.74557325 25.23560173 -20.42572658 12.15526792 -62.24060341 -95.49779408 76.64213517 -87.4595295 -210.40815021] y [0.99655019 1.00014813 1.00275923 0.9958086 0.99048307 0.98752504 0.97754803 0.95052248 0.89454048 0.80876215] tolerância 277.2654477171637 ===================================================== d [ 67.01643906 -29.30858096 20.98834996 -19.05567373 14.98794378 -63.13513896 -98.63123441 74.79400894 -83.19713255 -214.17004607] y [0.99669142 1.00008441 1.00281517 0.99576332 0.99051001 0.98738708 0.97733636 0.95069236 0.89434662 0.80829577] tolerância 280.1380828433424 ===================================================== d [ 70.26987665 -29.7614477 16.62879409 -17.61590268 17.75735456 -64.03375866 -101.66679175 72.68758111 -78.92320807 -218.14202032] y [0.99684047 1.00001923 1.00286184 0.99572094 0.99054335 0.98724667 0.977117 0.9508587 0.89416159 0.80781946] tolerância 283.30287121565357 ===================================================== d [ 73.45864273 -30.09735202 12.15460064 -16.10374373 20.45859054 -64.92233104 -104.58298313 70.30552799 -74.62503051 -222.28023916] y [0.99699675 0.99995304 1.00289883 0.99568176 0.99058284 0.98710426 0.9768909 0.95102036 0.89398606 0.80733431] tolerância 286.7061530407843 ===================================================== d [ 76.56228687 -30.30866585 7.56627753 -14.51725864 23.08458233 -65.7838369 -107.35334482 67.63033349 -70.29032424 -226.53020999] y [0.99715957 0.99988633 1.00292577 0.99564607 0.99062819 0.98696035 0.97665908 0.9511762 0.89382065 0.80684161] tolerância 290.28275501582357 ===================================================== d [ 79.56614867 -30.39045582 2.86853363 -12.85722957 25.62829836 -66.60795782 -109.9608556 64.65277862 -65.9188142 -230.86124474] y [0.99732984 0.99981892 1.00294259 0.99561378 0.99067953 0.98681405 0.97642033 0.9513266 0.89366433 0.80633781] tolerância 293.99811957800466 ===================================================== d [ 82.44552867 -30.33490654 -1.93180448 -11.12405656 28.07914577 -67.37630251 -112.37548572 61.3585508 -61.50466448 -235.21367674] y [0.9975068 0.99975134 1.00294897 0.99558519 0.99073652 0.98666591 0.97617578 0.95147039 0.89351772 0.80582438] tolerância 297.7824236747581 ===================================================== d [ 85.17585285 -30.1353972 -6.82394333 -9.32003702 30.42558863 -68.07160737 -114.56873089 57.73798426 -57.04739968 -239.53062382] y [0.99769016 0.99968387 1.00294468 0.99556045 0.99079897 0.98651607 0.97592585 0.95160685 0.89338094 0.80530126] tolerância 301.57080764267516 ===================================================== d [ 87.73079884 -29.78586719 -11.7933147 -7.44901876 32.65471973 -68.6759957 -116.5109606 53.78457251 -52.54985258 -243.7516805 ] y [0.99787959 0.99961685 1.0029295 0.99553972 0.99086664 0.98636468 0.97567105 0.95173526 0.89325406 0.80476855] tolerância 305.2951639684137 ===================================================== d [ 90.08279682 
-29.28112629 -16.82149262 -5.51651732 34.75252963 -69.17129913 -118.17210413 49.49562857 -48.01835016 -247.81368639] y [0.9980747 0.99955061 1.00290327 0.99552315 0.99093926 0.98621194 0.97541193 0.95185488 0.89313719 0.80422644] tolerância 308.88500240763807 ===================================================== d [ 92.20362948 -28.61716853 -21.88622864 -3.52979016 36.70423894 -69.53943006 -119.52242904 44.87288944 -43.46281446 -251.65169896] y [0.99827505 0.99948549 1.00286586 0.99551089 0.99101655 0.98605811 0.97514912 0.95196496 0.8930304 0.8036753 ] tolerância 312.2685264613878 ===================================================== d [ 94.06210335 -27.7907666 -26.9608226 -1.49778886 38.49372089 -69.76030823 -120.52944357 39.92203867 -38.89477137 -255.19045695] y [0.99847942 0.99942205 1.00281735 0.99550306 0.99109791 0.98590397 0.97488419 0.95206442 0.89293406 0.80311751] tolerância 315.3624755286728 ===================================================== d [ 95.63605249 -26.8025922 -32.01727526 0.56855122 40.10770466 -69.82144446 -121.17350985 34.65725061 -34.3343214 -258.38081729] y [0.99868861 0.99936025 1.00275739 0.99549973 0.99118352 0.98574882 0.97461613 0.95215321 0.89284756 0.80254996] tolerância 318.11532753038875 ===================================================== d [ 96.8952456 -25.65280753 -37.02212296 2.6569048 41.5300294 -69.70408639 -121.42540603 29.09468755 -29.79925755 -261.14656823] y [0.9989006 0.99930084 1.00268642 0.99550099 0.99127242 0.98559406 0.97434755 0.95223003 0.89277146 0.80197725] tolerância 320.443459613019 ===================================================== d [ 97.81834608 -24.34549501 -41.94148709 4.75339966 42.74797035 -69.39636601 -121.26815717 23.25850973 -25.31358 -263.43425898] y [0.99911537 0.99924398 1.00260436 0.99550688 0.99136447 0.98543956 0.9740784 0.95229452 0.89270541 0.80139841] tolerância 322.2908159457408 ===================================================== d [ 98.38538698 -22.8865133 -46.7394219 6.84298415 43.74956595 -68.8873384 -120.68712033 17.17757093 -20.90291285 -265.19072216] y [0.99933219 0.99919001 1.0025114 0.99551742 0.99145923 0.98528574 0.9738096 0.95234607 0.8926493 0.80081449] tolerância 323.601402357194 ===================================================== d [ 98.58041177 -21.28410284 -51.3793005 8.90969666 44.52486297 -68.16877881 -119.67312856 10.88586802 -16.59469743 -266.36974782] y [0.99955026 0.99913929 1.0024078 0.99553258 0.9915562 0.98513305 0.9735421 0.95238415 0.89260296 0.80022668] tolerância 324.32723909310187 ===================================================== d [ 98.39213355 -19.54900629 -55.82474978 10.93704694 45.06655198 -67.23533865 -118.22296168 4.42237339 -12.41682879 -266.93210767] y [0.99976804 0.99909227 1.00229429 0.99555227 0.99165456 0.98498245 0.97327773 0.95240819 0.89256631 0.79963825] tolerância 324.4286299655435 ===================================================== d [ 97.81442376 -17.69401259 -60.04056474 12.90834826 45.36988102 -66.08512541 -116.33989957 -2.17050743 -8.39828211 -266.84952294] y [0.9999854 0.99904908 1.00217097 0.99557643 0.99175411 0.98483392 0.97301656 0.95241796 0.89253888 0.79904857] tolerância 323.8777342175407 ===================================================== d [ 96.84680679 -15.73392735 -63.99377253 14.80717728 45.43320365 -64.71974405 -114.03395546 -8.84750317 -4.56765332 -266.1045711 ] y [1.00020148 0.99900999 1.00203834 0.99560494 0.99185434 0.98468794 0.97275955 0.95241317 0.89252032 0.79845907] tolerância 322.6587433612464 
===================================================== d [ 95.49454225 -13.68525205 -67.65451071 16.61779008 45.2581024 -63.1442556 -111.32170741 -15.56158248 -0.95235458 -264.69108271] y [1.00041542 0.99897524 1.00189697 0.99563765 0.99195471 0.98454496 0.97250764 0.95239362 0.89251023 0.79787122] tolerância 320.7681904003216 ===================================================== d [ 93.77125358 -11.56629208 -70.99882416 18.32607675 44.85087725 -61.36869715 -108.22895107 -22.26525543 2.42285768 -262.62047704] y [1.00062567 0.99894511 1.00174802 0.99567424 0.99205435 0.98440594 0.97226255 0.95235936 0.89250814 0.79728846] tolerância 318.2229659556965 ===================================================== d [ 91.6906257 -9.39532318 -74.00359702 19.91841733 44.21828607 -59.40280578 -104.78104706 -28.91129696 5.53521553 -259.90201494] y [1.00083212 0.99891964 1.0015917 0.99571459 0.9921531 0.98427083 0.97202427 0.95231034 0.89251347 0.79671027] tolerância 315.03538802831724 ===================================================== d [ 89.2741303 -7.19150888 -76.65270401 21.38339358 43.37139071 -57.26096538 -101.01161624 -35.45459916 8.3650073 -256.5634228 ] y [1.00103399 0.99889896 1.00142877 0.99575844 0.99225045 0.98414005 0.97179358 0.95224669 0.89252566 0.79613805] tolerância 311.23866865556806 ===================================================== d [ 86.55197002 -4.97437561 -78.93936929 22.71277768 42.32621441 -54.96259452 -96.9630894 -41.85452407 10.89733883 -252.6536335 ] y [1.00122987 0.99888318 1.00126058 0.99580536 0.99234561 0.9840144 0.97157194 0.95216889 0.89254401 0.7955751 ] tolerância 306.89074578465556 ===================================================== d [ 83.54911292 -2.76209319 -80.85488504 23.89838179 41.0966201 -52.52338139 -92.6710575 -48.07149618 13.11972352 -248.20625435] y [1.00141978 0.99887226 1.00108737 0.9958552 0.99243849 0.98389381 0.97135918 0.95207706 0.89256792 0.79502073] tolerância 302.0291902728771 ===================================================== d [ 80.29795405 -0.57238146 -82.39985683 24.93531991 39.70055982 -49.96333592 -88.17838568 -54.07226302 15.02401887 -243.275362 ] y [1.00160311 0.9988662 1.00090996 0.99590763 0.99252866 0.98377856 0.97115584 0.95197158 0.89259671 0.79447612] tolerância 296.71522583426287 ===================================================== d [ 76.83281505 1.57834222 -83.58025829 25.82102168 38.15735939 -47.30324979 -83.52886023 -59.82911179 16.60613505 -237.92089948] y [1.0017793 0.99886494 1.00072916 0.99596235 0.99261577 0.98366893 0.97096236 0.95185293 0.89262967 0.79394233] tolerância 291.0162368760119 ===================================================== d [ 73.19438826 3.6754145 -84.41307782 26.5571432 36.48998141 -44.56737081 -78.77173468 -65.3243715 17.86784171 -232.22260913] y [1.00194731 0.9988684 1.00054639 0.99601881 0.99269921 0.98356549 0.97077971 0.9517221 0.89266599 0.79342205] tolerância 285.02351770855597 ===================================================== d [ 69.40649473 5.70561264 -84.90075743 27.1413595 34.71305101 -41.76933503 -73.93711352 -70.5335359 18.80915481 -226.21216932] y [1.00210791 0.99887646 1.00036117 0.99607708 0.99277928 0.9834677 0.97060687 0.95157877 0.89270519 0.79291251] tolerância 278.7681601043395 ===================================================== d [ 65.51538163 7.65866912 -85.07598087 27.5817438 32.85278415 -38.93603376 -69.07766098 -75.4575924 19.4402779 -219.99284097] y [1.00225969 0.99888894 1.00017552 0.99613643 0.99285519 0.98337636 0.97044519 0.95142453 0.89274632 
0.79241784] tolerância 272.3697150693507 ===================================================== d [ 61.54780156 9.52544897 -84.95338642 27.88126454 30.9257001 -36.08224491 -64.22412805 -80.08749558 19.76884184 -213.61100552] y [1.00240295 0.99890568 0.99998948 0.99619675 0.99292703 0.98329122 0.97029413 0.95125952 0.89278884 0.79193677] tolerância 265.8775103574237 ===================================================== d [ 57.53463598 11.29925654 -84.55711328 28.04653394 28.95044318 -33.22498515 -59.41062622 -84.42551196 19.80616794 -207.13087272] y [1.00253754 0.99892651 0.9998037 0.99625772 0.99299465 0.98321232 0.97015369 0.95108439 0.89283207 0.79146966] tolerância 259.3631066832991 ===================================================== d [ 53.50406631 12.9754596 -83.91276064 28.0852158 26.94431231 -30.37939567 -54.66740482 -88.47910245 19.56515266 -200.61329489] y [1.00266336 0.99895122 0.9996188 0.99631905 0.99305796 0.98313966 0.97002377 0.95089978 0.89287538 0.79101672] tolerância 252.89453940598236 ===================================================== d [ 49.48118378 14.55137032 -83.04664797 28.00571614 24.92300842 -27.5585498 -50.02057905 -92.26032723 19.05968771 -194.11481497] y [1.00278036 0.9989796 0.9994353 0.99638046 0.99311688 0.98307323 0.96990423 0.95070629 0.89291816 0.79057803] tolerância 246.535363008716 ===================================================== d [ 45.48773157 16.02607778 -81.98512957 27.81688159 22.90045705 -24.77334074 -45.4920136 -95.78518067 18.3040988 -187.68695203] y [1.00288856 0.99901142 0.9992537 0.99644171 0.99317138 0.98301297 0.96979485 0.95050454 0.89295984 0.79015355] tolerância 240.3439509131286 ===================================================== d [ 41.54197191 17.40024399 -80.75398139 27.52771541 20.88870369 -22.03244301 -41.09934646 -99.07288894 17.31262157 -181.37571852] y [1.00298803 0.99904646 0.99907442 0.99650253 0.99322146 0.98295879 0.96969537 0.95029509 0.89299987 0.78974313] tolerância 234.37304190179967 ===================================================== d [ 37.65533369 18.6743168 -79.37104469 27.14465982 18.89616217 -19.34057606 -36.85309249 -102.13711087 16.09675096 -175.20735557] y [1.00307918 0.99908464 0.99889723 0.99656293 0.99326729 0.98291045 0.96960519 0.9500777 0.89303785 0.78934515] tolerância 228.65090216775715 ===================================================== d [ 33.84642259 19.85467546 -77.874057 26.68140926 16.93475681 -16.70595158 -32.76988065 -105.01914023 14.67305035 -169.24761901] y [1.00316152 0.99912548 0.99872367 0.99662229 0.99330861 0.98286816 0.9695246 0.94985435 0.89307305 0.78896202] tolerância 223.2596458391227 ===================================================== d [ 30.11679176 20.94193684 -76.27029115 26.14111039 15.00759693 -14.12759084 -28.84977085 -107.7261466 13.04935347 -163.49553115] y [1.00323579 0.99916904 0.9985528 0.99668084 0.99334577 0.9828315 0.9694527 0.94962392 0.89310525 0.78859066] tolerância 218.19538008366888 ===================================================== d [ 26.47487936 21.94250456 -74.58631384 25.53370242 13.12108898 -11.60777967 -25.09911504 -110.29325445 11.23695535 -157.99052387] y [1.00330187 0.99921499 0.99838544 0.9967382 0.9933787 0.9828005 0.96938939 0.94938755 0.89313388 0.78823191] tolerância 213.50931312217597 ===================================================== d [ 22.9227792 22.86138887 -72.83882925 24.86574144 11.27827408 -9.14533471 -21.5180344 -112.74590586 9.24379888 -152.75023415] y [1.00335996 0.99926314 0.99822179 0.99679422 0.99340749 
0.98277503 0.96933432 0.94914554 0.89315854 0.78788525] tolerância 209.22590807251046 ===================================================== d [ 19.4587825 23.70218784 -71.03742153 24.14115265 9.48012793 -6.73709866 -18.10335385 -115.10212832 7.07456726 -147.77888378] y [1.00341043 0.99931347 0.99806142 0.99684897 0.99343232 0.9827549 0.96928695 0.94889732 0.89317889 0.78754895] tolerância 205.35259705110997 ===================================================== d [ 16.08235101 24.47188204 -69.19992405 23.36670981 7.72804274 -4.38005687 -14.85285897 -117.39512164 4.73405129 -143.09764905] y [1.00345327 0.99936566 0.99790502 0.99690212 0.99345319 0.98274006 0.96924709 0.9486439 0.89319446 0.78722359] tolerância 201.92232563974508 ===================================================== d [ 12.7889335 25.17541064 -67.33552214 22.5460781 6.02128193 -2.06922541 -11.76079435 -119.64764732 2.22401697 -138.71078266] y [1.00348868 0.99941953 0.99775267 0.99695356 0.99347021 0.98273042 0.96921439 0.94838544 0.89320489 0.78690854] tolerância 198.94671925559103 ===================================================== d [ 9.57203213 25.81610128 -65.44799161 21.68078474 4.35805891 0.20125021 -8.82019016 -121.87586395 -0.45663559 -134.61409947] y [1.00351693 0.99947515 0.99760392 0.99700337 0.99348351 0.98272585 0.96918841 0.94812113 0.8932098 0.78660212] tolerância 196.42634843121786 ===================================================== d [ 6.42526274 26.39956586 -63.54604619 20.77394518 2.73651488 2.43759005 -6.02428554 -124.10648756 -3.31000038 -130.81428432] y [1.00353808 0.99953218 0.99745934 0.99705126 0.99349314 0.98272629 0.96916892 0.94785189 0.89320879 0.78630474] tolerância 194.37940626368328 ===================================================== d [ 3.34026614 26.92832608 -61.62992294 19.82548746 1.15380705 4.6466073 -3.36521532 -126.35332996 -6.34097608 -127.30366972] y [1.00355232 0.9995907 0.99731848 0.99709731 0.9934992 0.9827317 0.96915557 0.9475768 0.89320145 0.78601479] tolerância 192.80413033883104 ===================================================== d [ 0.30849423 27.40629417 -59.70266164 18.83616512 -0.3930864 6.8355078 -0.83515795 -128.63696587 -9.55572529 -124.08140741] y [1.00355972 0.99965038 0.99718188 0.99714126 0.99350176 0.982742 0.96914811 0.94729673 0.8931874 0.78573261] tolerância 191.7106198490159 ===================================================== d [ -2.67926749 27.83481933 -57.76142783 17.80445461 -1.90746222 9.01141301 1.57353941 -130.96817647 -12.96253087 -121.13780536] y [1.00356041 0.99971134 0.9970491 0.99718315 0.99350089 0.9827572 0.96914625 0.94701065 0.89316615 0.78545665] tolerância 191.09608874831108 ===================================================== d [ -5.63237293 28.21566515 -55.8042138 16.72901193 -3.39272139 11.18158836 3.86839201 -133.36022195 -16.57072064 -118.46603075] y [1.00355445 0.99977324 0.99692064 0.99722274 0.99349664 0.98277724 0.96914975 0.94671937 0.89313732 0.78518724] tolerância 190.96350033100524 ===================================================== d [ -8.55998727 28.54874919 -53.82565436 15.60710498 -4.85207321 13.35284927 6.055959 -135.82022043 -20.39125186 -116.05543039] y [1.00354188 0.9998362 0.99679611 0.99726007 0.99348907 0.98280219 0.96915838 0.94642179 0.89310034 0.78492289] tolerância 191.3097003412978 ===================================================== d [ -11.47105512 28.8336419 -51.82017616 14.43576236 -6.28856446 15.53182739 8.14206877 -138.35529187 -24.43635353 -113.89693854] y [1.00352272 0.99990012 0.9966756 
0.99729502 0.99347821 0.98283209 0.96917194 0.9461177 0.89305469 0.78466306] tolerância 192.1343219286475 ===================================================== d [ -14.37365545 29.06790046 -49.77858748 13.21110063 -7.70480966 17.72387304 10.13196046 -140.96254414 -28.7163781 -111.9734965 ] y [1.00349712 0.99996446 0.99655997 0.99732723 0.99346418 0.98286675 0.96919011 0.94580897 0.89300016 0.7844089 ] tolerância 193.4251154576151 ===================================================== d [ -17.27564473 29.24975912 -47.69439703 11.92951727 -9.10311242 19.93471753 12.02925269 -143.646927 -33.24579084 -110.27855906] y [1.00346483 1.00002976 0.99644815 0.99735691 0.99344687 0.98290656 0.96921287 0.94549232 0.89293565 0.78415737] tolerância 195.18481132535157 ===================================================== d [ -20.18294699 29.37439017 -45.55619892 10.58632224 -10.48478743 22.16774873 13.83627044 -146.39791373 -38.03520843 -108.79392059] y [1.00342602 1.00009546 0.99634101 0.9973837 0.99342642 0.98295134 0.96923989 0.94516963 0.89286097 0.78390965] tolerância 197.39698386503503 ===================================================== d [ -23.10023173 29.43647668 -43.35301541 9.17707649 -11.8504541 24.42532654 15.55399769 -149.2035283 -43.09461657 -107.50289845] y [1.00338069 1.00016145 0.99623868 0.99740748 0.99340287 0.98300114 0.96927097 0.94484077 0.89277553 0.78366526] tolerância 200.04624024295694 ===================================================== d [ -26.03160794 29.43103895 -41.07570794 7.69781062 -13.20032837 26.70957327 17.1823059 -152.05542659 -48.4353024 -106.39450041] y [1.00332862 1.00022779 0.99614097 0.99742817 0.99337616 0.98305619 0.96930603 0.9445045 0.8926784 0.78342297] tolerância 203.12509283033637 ===================================================== d [ -28.97762559 29.34992341 -38.71145854 6.14419285 -14.53282844 29.01903274 18.7186366 -154.92849325 -54.06233915 -105.44733719] y [1.00326995 1.00029412 0.99604839 0.99744552 0.99334641 0.98311638 0.96934476 0.9441618 0.89256924 0.78318317] tolerância 206.60556403427873 ===================================================== d [ -31.93909837 29.18671557 -36.25150496 4.51301527 -15.84641465 31.35288184 20.16005546 -157.80757968 -59.98318032 -104.65019171] y [1.00320443 1.00036049 0.99596085 0.99745941 0.99331355 0.983182 0.96938708 0.94381146 0.89244699 0.78294473] tolerância 210.47547585152498 ===================================================== d [ -34.91173797 28.93137954 -33.68350931 2.80122244 -17.13705991 33.70533177 21.50050662 -160.65658442 -66.19563114 -103.97922128] y [1.0031322 1.00042649 0.99587888 0.99746962 0.99327771 0.9832529 0.96943267 0.94345462 0.89231135 0.78270809] tolerância 214.6959663638525 ===================================================== d [ -37.89229777 28.57646478 -31.00020379 1.00716573 -18.40120598 36.07191577 22.73428649 -163.45300121 -72.70067049 -103.42271924] y [1.003053 1.00049213 0.99580246 0.99747597 0.99323884 0.98332937 0.96948145 0.94309014 0.89216118 0.78247219] tolerância 219.24787982420497 ===================================================== d [ -40.87088402 28.11078786 -28.1913972 -0.86989774 -19.63217331 38.44184652 23.85204843 -166.1501092 -79.48582418 -102.95474725] y [1.00296703 1.00055696 0.99573213 0.99747826 0.99319709 0.9834112 0.96953303 0.94271931 0.89199624 0.78223756] tolerância 224.07991916901844 ===================================================== d [ -43.83656988 27.52464369 -25.25063336 -2.82896075 -20.82291962 40.80357377 24.84423048 -168.7054921 
[Truncated numerical cell output: an iterative optimization trace printing, for each step, a 10-component search direction `d`, the current 10-component iterate `y`, and a tolerance ("tolerância") value, repeated over many iterations with tolerance values ranging roughly from 110 to 290.]
0.9636616 0.9281525 0.86279761 0.74419204] tolerância 106.71336487406614 ===================================================== d [-27.20664408 -12.78916345 29.27064766 9.49405475 0.87432039 3.47615625 -36.18299582 -39.72677493 53.70904816 -59.10641626] y [0.99991385 0.99781418 0.99628172 0.99891054 0.99320206 0.98059449 0.96357583 0.92805503 0.8629273 0.74404972] tolerância 105.4949112682504 ===================================================== d [-27.61882214 -11.92493138 31.21998011 6.26615061 -0.51559451 5.3886424 -36.15746082 -38.27492221 52.56737489 -58.14802526] y [0.99984929 0.99778383 0.99635118 0.99893307 0.99320414 0.98060274 0.96348997 0.92796075 0.86305475 0.74390946] tolerância 104.23584708858218 ===================================================== d [-27.9725854 -11.03527566 33.05178423 3.09155375 -1.87503803 7.22756145 -36.0657464 -36.73163676 51.23746629 -57.10654973] y [0.99978396 0.99775562 0.99642504 0.99894789 0.99320292 0.98061549 0.96340444 0.92787021 0.8631791 0.74377191] tolerância 102.94834421375715 ===================================================== d [-2.82673588e+01 -1.01221892e+01 3.47600951e+01 -1.96658505e-02 -3.20134231e+00 8.98698880e+00 -3.59065070e+01 -3.51044184e+01 4.97284624e+01 -5.59865115e+01] y [0.99971779 0.99772951 0.99650322 0.99895521 0.99319848 0.98063258 0.96331912 0.92778332 0.86330031 0.74363682] tolerância 101.63556623662711 ===================================================== d [-28.5031657 -9.18784023 36.34002748 -3.05795089 -4.49208943 10.66153966 -35.67916025 -33.401378 48.05044534 -54.79347628] y [0.99965071 0.99770549 0.99658571 0.99895516 0.99319088 0.98065391 0.96323392 0.92770002 0.86341831 0.74350396] tolerância 100.30188277400302 ===================================================== d [-28.68425063 -8.23580576 37.79284147 -6.01514492 -5.74586565 12.24824288 -35.3884068 -31.63560036 46.22123872 -53.54070532] y [0.99958328 0.99768376 0.99667167 0.99894793 0.99318026 0.98067913 0.96314952 0.92762101 0.86353198 0.74337435] tolerância 98.96610820557541 ===================================================== d [-28.81097905 -7.26804231 39.11558966 -8.88414785 -6.96120793 13.74350798 -35.03395632 -29.8147809 44.25120432 -52.23333219] y [0.99951543 0.99766428 0.99676107 0.9989337 0.99316667 0.9807081 0.9630658 0.92754617 0.86364132 0.74324769] tolerância 97.63189104255287 ===================================================== d [-28.8855142 -6.28692642 40.30823882 -11.65909552 -8.13730868 15.14509353 -34.61773493 -27.94827408 42.15338581 -50.87949514] y [0.99944728 0.99764708 0.9968536 0.99891268 0.9931502 0.98074061 0.96298293 0.92747564 0.863746 0.74312413] tolerância 96.30836519592422 ===================================================== d [-28.91023968 -5.29467487 41.37171291 -14.33535769 -9.27383238 16.45158541 -34.14188588 -26.04508658 39.94064022 -49.48721065] y [0.99937894 0.99763221 0.99694896 0.9988851 0.99313095 0.98077644 0.96290104 0.92740953 0.86384571 0.74300377] tolerância 95.00457328218658 ===================================================== d [-28.88770078 -4.29331498 42.30779133 -16.90951173 -10.37089599 17.66234355 -33.60870047 -24.11377473 37.62546092 -48.06426925] y [0.99931056 0.99761969 0.99704682 0.99885119 0.99310901 0.98081536 0.96282027 0.92734792 0.86394019 0.74288671] tolerância 93.72936911616574 ===================================================== d [-28.82054455 -3.28466069 43.11899431 -19.37929285 -11.42904156 18.77743551 -33.02054801 -22.16235845 35.21981903 -46.61814205] y [0.99924222 0.99760953 
0.99714691 0.99881119 0.99308448 0.98085714 0.96274077 0.92729087 0.8640292 0.74277301] tolerância 92.49132967488481 ===================================================== d [-28.7114599 -2.27029509 43.80845895 -21.74352333 -12.44920221 19.79755932 -32.37980659 -20.19825296 32.73502654 -45.15589938] y [0.99917404 0.99760176 0.99724891 0.99876535 0.99305744 0.98090156 0.96266266 0.92723845 0.86411251 0.74266273] tolerância 91.29867770751642 ===================================================== d [-28.56197322 -1.25142024 44.37791595 -24.00102509 -13.432151 20.72300669 -31.68746253 -18.22739753 30.18013686 -43.68244803] y [0.99910591 0.99759637 0.99735286 0.99871375 0.9930279 0.98094854 0.96258582 0.92719052 0.8641902 0.74255557] tolerância 90.15540423149417 ===================================================== d [-28.37733736 -0.2293733 44.83555748 -26.15469935 -14.38066383 21.55748885 -30.94874207 -16.25771662 27.56799726 -42.20790827] y [0.99903835 0.99759341 0.99745784 0.99865697 0.99299613 0.98099756 0.96251086 0.9271474 0.86426159 0.74245224] tolerância 89.07743932880132 ===================================================== d [-28.15850708 0.79511388 45.18326779 -28.20488637 -15.29593238 22.30201358 -30.16395438 -14.29351958 24.90563404 -40.73540985] y [0.99897122 0.99759287 0.9975639 0.9985951 0.99296211 0.98104856 0.96243765 0.92710894 0.8643268 0.7423524 ] tolerância 88.06676703605564 ===================================================== d [-27.90778101 1.82142436 45.42534392 -30.15373072 -16.1800162 22.95890081 -29.33489166 -12.33948928 22.20084674 -39.26988137] y [0.99890461 0.99759475 0.99767079 0.99852838 0.99292592 0.98110131 0.9623663 0.92707513 0.86438572 0.74225603] tolerância 87.12964631695543 ===================================================== d [-27.62634888 2.84915273 45.56446621 -32.00281366 -17.03461856 23.52975135 -28.46207982 -10.39925792 19.45962691 -37.81452245] y [0.99883838 0.99759907 0.99777858 0.99845683 0.99288753 0.98115579 0.96229668 0.92704585 0.8644384 0.74216285] tolerância 86.26882782874917 ===================================================== d [-27.31816428 3.87820871 45.60839294 -33.75766895 -17.86331591 24.01906943 -27.54901613 -8.47708619 16.68963323 -36.37590687] y [0.99877303 0.99760581 0.99788637 0.99838112 0.99284723 0.98121145 0.96222935 0.92702125 0.86448443 0.74207339] tolerância 85.49620933824355 ===================================================== d [-26.98279398 4.9084344 45.55764961 -35.41926482 -18.66730538 24.42761387 -26.59447423 -6.57501013 13.89428137 -34.95459178] y [0.9987082 0.99761502 0.9979946 0.99830101 0.99280484 0.98126845 0.96216398 0.92700113 0.86452404 0.74198707] tolerância 84.80999721029235 ===================================================== d [-26.62260304 5.94010503 45.41772201 -36.99247668 -19.44972273 24.75889059 -25.60013266 -4.69564803 11.07839253 -33.55467843] y [0.99864417 0.99762666 0.99810271 0.99821696 0.99276055 0.98132642 0.96210087 0.92698553 0.86455701 0.74190413] tolerância 84.21768311771085 ===================================================== d [-26.23845437 6.97348336 45.19176738 -38.48083974 -20.21294223 25.01526333 -24.56610606 -2.84071813 8.24518032 -32.17813946] y [0.998581 0.99764076 0.99821048 0.99812918 0.99271439 0.98138517 0.96204012 0.92697438 0.8645833 0.7418245 ] tolerância 83.72253078151923 ===================================================== d [-25.83085357 8.00890017 44.88246524 -39.88777667 -20.95924519 25.19888208 -23.49214037 -1.01155173 5.39716077 -30.82644494] y [0.99851873 
0.99765731 0.99831772 0.99803787 0.99266643 0.98144453 0.96198183 0.92696764 0.86460286 0.74174814] tolerância 83.32720118281813 ===================================================== d [-25.39993856 9.04671152 44.49196786 -41.21648842 -21.69077465 25.31163265 -22.37762133 0.79082517 2.53625893 -29.50060528] y [0.99845744 0.99767631 0.99842423 0.99794321 0.99261669 0.98150433 0.96192608 0.92696524 0.86461567 0.74167499] tolerância 83.03375279804175 ===================================================== d [-24.9452504 10.08722611 44.02135707 -42.46940412 -22.40934019 25.35473701 -21.22131799 2.56554259 -0.33616618 -28.20111469] y [0.99839697 0.99769785 0.99853014 0.9978451 0.99256505 0.98156458 0.96187281 0.92696713 0.86462171 0.74160476] tolerância 82.84281051012672 ===================================================== d [-24.46680067 11.1309483 43.47273327 -43.65003102 -23.1172303 25.33007064 -20.02248997 4.31196732 -3.21897778 -26.92892097] y [0.99833759 0.99772186 0.99863493 0.997744 0.99251171 0.98162494 0.96182229 0.92697323 0.86462091 0.74153763] tolerância 82.75727566542943 ===================================================== d [-23.96359126 12.17806417 42.84648613 -44.76038675 -23.81594995 25.23852383 -18.77953003 6.02953199 -6.11138096 -25.68390098] y [0.99827935 0.99774836 0.99873842 0.99764009 0.99245668 0.98168524 0.96177463 0.9269835 0.86461325 0.74147353] tolerância 82.77736922827236 ===================================================== d [-23.4342674 13.22858855 42.14241756 -45.80192111 -24.50670441 25.08061935 -17.49059221 7.71762177 -9.01264556 -24.46568102] y [0.99822231 0.99777735 0.99884041 0.99753354 0.99239999 0.98174532 0.96172993 0.92699785 0.8645987 0.74141239] tolerância 82.90268774589718 ===================================================== d [-22.87713507 14.28231223 41.35976529 -46.77545949 -25.19036522 24.85651572 -16.15364585 9.37548814 -11.92195901 -23.27369238] y [0.99816652 0.99780884 0.99894073 0.99742451 0.99234165 0.98180502 0.96168829 0.92701622 0.86457724 0.74135415] tolerância 83.13218592665987 ===================================================== d [-22.29018435 15.338749 40.49724094 -47.68115878 -25.86743866 24.56602153 -14.76653857 11.00216637 -14.83827727 -22.10722684] y [0.99811206 0.99784284 0.99903918 0.99731317 0.99228169 0.98186419 0.96164984 0.92703854 0.86454886 0.74129875] tolerância 83.46415421873526 ===================================================== d [-21.67142131 16.39735425 39.55355764 -48.51910618 -26.53847317 24.2088527 -13.32717955 12.59647889 -17.76043384 -20.96597349] y [0.99805884 0.99787946 0.99913589 0.99719931 0.99221992 0.98192285 0.96161458 0.92706481 0.86451343 0.74124596] tolerância 83.89732100476184 ===================================================== d [-21.01828786 17.45698082 38.52654144 -49.28811118 -27.20320883 23.78421116 -11.83335017 14.15686409 -20.68659177 -19.84908441] y [0.99800709 0.99791862 0.99923034 0.99708345 0.99215655 0.98198066 0.96158275 0.92709489 0.86447102 0.74119589] tolerância 84.42864669368359 ===================================================== d [-20.32799054 18.5160359 37.41364332 -49.98620678 -27.86091617 23.29102672 -10.28293077 15.68131643 -23.61422984 -18.75571026] y [0.9979569 0.9979603 0.99932234 0.99696575 0.99209159 0.98203745 0.9615545 0.9271287 0.86442162 0.7411485 ] tolerância 85.0542521199157 ===================================================== d [-19.59757299 19.57245034 36.21207677 -50.61070928 -28.51040412 22.72804065 -8.67401316 17.16734096 -26.54001934 -17.68507315] y 
[0.99790836 0.99800452 0.99941168 0.99684639 0.99202506 0.98209307 0.96152994 0.92716614 0.86436524 0.74110371] tolerância 85.76946770878368 ===================================================== d [-18.82447121 20.62421559 34.91980875 -51.15958757 -29.15086067 22.09438094 -7.00512491 18.61228761 -29.46045147 -16.63716297] y [0.99786141 0.9980514 0.99949842 0.99672516 0.99195676 0.98214751 0.96150916 0.92720726 0.86430166 0.74106135] tolerância 86.5710817347809 ===================================================== d [-18.00475087 21.66732589 33.53242982 -51.6266851 -29.77884334 21.38774345 -5.27480104 20.01193415 -32.3690992 -15.61053065] y [0.99781646 0.99810065 0.9995818 0.99660299 0.99188715 0.98220027 0.96149244 0.92725171 0.86423131 0.74102162] tolerância 87.44910459627943 ===================================================== d [-17.13617274 22.69911191 32.04868446 -52.00982747 -30.39321072 20.60768801 -3.48258771 21.36291886 -35.26099719 -14.6058267 ] y [0.99777333 0.99815255 0.99966213 0.99647933 0.99181582 0.9822515 0.9614798 0.92729964 0.86415378 0.74098423] tolerância 88.40041462173853 ===================================================== d [-16.21536191 23.71490769 30.46532051 -52.30294997 -30.99032767 19.75252813 -1.62829254 22.66015977 -38.12801885 -13.62263753] y [0.99773229 0.99820692 0.99973889 0.99635475 0.99174302 0.98230087 0.96147146 0.92735082 0.86406931 0.74094924] tolerância 89.41525689614639 ===================================================== d [-15.23895337 24.70895914 28.77916673 -52.49869424 -31.56545683 18.82062589 0.28732449 23.89764236 -40.95995511 -12.66046617] y [0.99769356 0.99826355 0.99981164 0.99622985 0.99166902 0.98234803 0.96146757 0.92740493 0.86397827 0.74091671] tolerância 90.4814395984113 ===================================================== d [-14.2053046 25.67722409 26.99034932 -52.59433575 -32.11653302 17.81237947 2.26269201 25.07074106 -43.74890797 -11.72075659] y [0.99765706 0.99832274 0.99988058 0.9961041 0.99159341 0.98239312 0.96146826 0.92746217 0.86388015 0.74088638] tolerância 91.59427436561506 ===================================================== d [-13.11180324 26.61308497 25.09722444 -52.58260994 -32.63841123 16.72705883 4.29532055 26.17255454 -46.48253197 -10.80391931] y [0.99762303 0.99838425 0.99994523 0.99597812 0.99151648 0.98243578 0.96147368 0.92752222 0.86377536 0.74085831] tolerância 92.74071621388396 ===================================================== d [-11.95710646 27.51099331 23.10061571 -52.45947218 -33.12766268 15.56550725 6.3819073 27.19718518 -49.14991669 -9.91158679] y [0.99759153 0.99844819 1.00000553 0.99585177 0.99143805 0.98247598 0.961484 0.92758511 0.86366367 0.74083235] tolerância 93.91250370349138 ===================================================== d [-10.73919573 28.36220821 20.99997642 -52.21570082 -33.57708513 14.32769004 8.51737217 28.13599024 -51.73434134 -9.04423544] y [0.99756289 0.99851409 1.00006087 0.99572611 0.9913587 0.98251326 0.96149929 0.92765026 0.86354594 0.74080861] tolerância 95.09125508056343 ===================================================== d [ -9.45783095 29.16002958 18.79815171 -51.84718179 -33.98231169 13.01578304 10.69592498 28.98237639 -54.22225593 -8.20381327] y [0.99753716 0.99858203 1.00011117 0.99560104 0.99127827 0.98254758 0.96151969 0.92771765 0.86342202 0.74078694] tolerância 96.2663145499922 ===================================================== d [ -8.11313495 29.89675458 16.49859996 -51.348902 -34.33796919 11.63235775 12.91044126 29.72897582 -56.59802373 
-7.39225813] y [0.99751451 0.99865188 1.0001562 0.99547684 0.99119687 0.98257876 0.96154531 0.92778708 0.86329214 0.74076729] tolerância 97.42398869044328 ===================================================== d [ -6.70601922 30.56450802 14.10616452 -50.71654939 -34.63865063 10.18090192 15.15257425 30.36850069 -58.84538394 -6.6117598 ] y [0.99749508 0.99872349 1.00019572 0.99535384 0.99111462 0.98260662 0.96157623 0.92785829 0.86315656 0.74074958] tolerância 98.55018977314431 ===================================================== d [ -5.2382449 31.15537517 11.62714348 -49.94669993 -34.87902115 8.66587251 17.41278302 30.89390929 -60.94770673 -5.8647228 ] y [0.99747901 0.9987967 1.00022951 0.99523236 0.99103165 0.98263101 0.96161253 0.92793103 0.86301561 0.74073375] tolerância 99.63061751224863 ===================================================== d [ -3.71246778 31.66154933 9.06932601 -49.03699691 -35.0539314 7.09272627 19.68039486 31.29858228 -62.88828466 -5.15372017] y [0.99746646 0.99887133 1.00025736 0.99511272 0.9909481 0.98265177 0.96165424 0.92800503 0.86286961 0.7407197 ] tolerância 100.65096675943016 ===================================================== d [ -2.13226407 32.07549052 6.44199044 -47.9863164 -35.15853577 5.46792473 21.9437036 31.57650417 -64.65065434 -4.48143932] y [0.99745757 0.99894717 1.00027908 0.99499526 0.99086413 0.98266876 0.96170138 0.92808001 0.86271897 0.74070735] tolerância 101.59715759528825 ===================================================== d [ -0.50213403 32.39009126 3.75586062 -46.79491177 -35.18841203 3.7989109 24.19010425 31.72244429 -66.21894042 -3.85062081] y [0.99745246 0.99902401 1.00029452 0.99488031 0.99077992 0.98268185 0.96175395 0.92815564 0.86256411 0.74069662] tolerância 102.45558276320256 ===================================================== d [ 1.17251865 32.59884444 1.02301828 -45.46453098 -35.13967867 2.09405598 26.4062628 31.73213082 -67.57821201 -3.26399186] y [0.99745126 0.99910159 1.00030351 0.99476822 0.99069563 0.98269095 0.96181189 0.92823163 0.86240549 0.74068739] tolerância 103.21336595458882 ===================================================== d [ 2.88531528 32.695466 -1.74320757 -43.99765063 -35.00839707 0.36256734 28.57777626 31.60197619 -68.71370896 -2.72394979] y [0.99745406 0.99917944 1.00030596 0.99465966 0.99061172 0.98269595 0.96187495 0.9283074 0.86224412 0.7406796 ] tolerância 103.85681389083216 ===================================================== d [ 4.6293121 32.67595785 -4.52843031 -42.40048862 -34.79310646 -1.38555559 30.69122653 31.33080302 -69.61515559 -2.23326596] y [0.99746097 0.99925775 1.00030178 0.99455427 0.99052786 0.98269682 0.9619434 0.9283831 0.86207953 0.74067308] tolerância 104.37799029232143 ===================================================== d [ 6.3966234 32.53599697 -7.31739984 -40.67874938 -34.49143528 -3.13977018 32.73182397 30.91763254 -70.27134521 -1.79396601] y [0.99747203 0.99933578 1.00029097 0.99445302 0.99044478 0.98269351 0.96201669 0.92845792 0.86191329 0.74066774] tolerância 104.76584710407423 ===================================================== d [ 8.17894046 32.27317438 -10.09431626 -38.84084152 -34.10283893 -4.88909736 34.68571729 30.36357213 -70.67488061 -1.40804562] y [0.9974873 0.99941347 1.00027349 0.99435588 0.99036241 0.98268602 0.96209485 0.92853175 0.86174549 0.74066346] tolerância 105.01348854267148 ===================================================== d [ 9.96751905 31.88618379 -12.84306742 -36.89644601 -33.62754782 -6.62232312 36.53940733 29.67107019 -70.82051255 
-1.07713684] y [0.99750683 0.99949054 1.00024939 0.99426313 0.99028098 0.98267434 0.96217767 0.92860425 0.86157673 0.7406601 ] tolerância 105.1153904703126 ===================================================== d [ 11.75335186 31.37504143 -15.54753332 -34.8565999 -33.06676058 -8.32819522 38.28015452 28.84407616 -70.70565302 -0.80247131] y [0.99753063 0.99956668 1.00021872 0.99417503 0.99020068 0.98265853 0.96226493 0.9286751 0.86140761 0.74065752] tolerância 105.0680571514047 ===================================================== d [ 13.52733769 30.74110872 -18.19188986 -32.7335205 -32.42263741 -9.99562805 39.89622997 27.88800203 -70.3304786 -0.58484081] y [0.9975587 0.9996416 1.00018159 0.99409179 0.99012172 0.98263864 0.96235634 0.92874398 0.86123878 0.74065561] tolerância 104.87011456838505 ===================================================== d [ 15.28052669 29.98750841 -20.76115046 -30.54068439 -31.69857523 -11.61406126 41.37761419 26.81007785 -69.69887908 -0.42439433] y [0.9975909 0.99971478 1.00013829 0.99401387 0.99004454 0.98261485 0.96245131 0.92881037 0.86107136 0.74065422] tolerância 104.5236389765723 ===================================================== d [ 17.0039355 29.11770791 -23.24068488 -28.2914574 -30.89799906 -13.17319295 42.71460458 25.61789468 -68.81540932 -0.32097142] y [0.99762728 0.99978616 1.00008887 0.99394117 0.98996908 0.9825872 0.96254981 0.92887419 0.86090544 0.74065321] tolerância 104.02984016366707 ===================================================== d [ 18.68919368 28.13688585 -25.61726481 -26.00024865 -30.02566442 -14.66360437 43.89971517 24.32062932 -67.6884106 -0.27394528] y [0.99766775 0.99985548 1.00003354 0.99387382 0.98989553 0.98255584 0.96265149 0.92893517 0.86074163 0.74065244] tolerância 103.39372944146623 ===================================================== d [ 20.32850358 27.05130079 -27.87903806 -23.68166949 -29.08701198 -16.07678154 44.92715848 22.92838956 -66.32865926 -0.28222446] y [0.99771224 0.99992246 0.99997256 0.99381193 0.98982405 0.98252094 0.96275599 0.92899306 0.8605805 0.74065179] tolerância 102.62227654606143 ===================================================== d [ 21.9147745 25.86814731 -30.01571632 -21.35026385 -28.0880477 -17.40524584 45.79290658 21.45201067 -64.74911257 -0.3442807 ] y [0.99776063 0.99998685 0.9999062 0.99375556 0.98975481 0.98248267 0.96286294 0.92904764 0.86042261 0.74065112] tolerância 101.72424267513354 ===================================================== d [ 23.44237449 24.59630192 -32.01975842 -19.02080517 -27.03605551 -18.64329752 46.49624135 19.9036037 -62.9667804 -0.45803225] y [0.99781264 1.00004824 0.99983497 0.99370489 0.98968816 0.98244136 0.9629716 0.92909855 0.86026896 0.7406503 ] tolerância 100.7133502329505 ===================================================== d [ 24.90434113 23.24232923 -33.88202497 -16.705514 -25.93574965 -19.78438778 47.03314394 18.29314552 -60.99312348 -0.6213497 ] y [0.99786844 1.00010679 0.99975875 0.99365962 0.9896238 0.98239698 0.96308228 0.92914593 0.86011907 0.74064921] tolerância 99.59360414950916 ===================================================== d [ 26.29811468 21.81727278 -35.60025079 -14.41866464 -24.79616752 -20.82621436 47.40861068 16.63401772 -58.85088236 -0.83154839] y [0.99792754 1.00016194 0.99967835 0.99361997 0.98956226 0.98235004 0.9631939 0.92918934 0.85997433 0.74064774] tolerância 98.38610593769177 ===================================================== d [ 27.61915286 20.329179 -37.16976532 -12.17130044 -23.62297184 -21.76519537 47.62311563 
14.93686244 -56.5554414 -1.0858205 ] y [0.99798994 1.00021371 0.99959387 0.99358576 0.98950341 0.98230061 0.9633064 0.92922881 0.85983468 0.74064576] tolerância 97.09975087823186 ===================================================== d [ 28.8649878 18.78726991 -38.58914452 -9.97423031 -22.42296564 -22.59980333 47.68066291 13.21290847 -54.12551032 -1.38121398] y [0.99805548 1.00026195 0.99950566 0.99355687 0.98944736 0.98224897 0.96341941 0.92926426 0.85970047 0.74064319] tolerância 95.74863999069784 ===================================================== d [ 30.03411286 17.20063194 -39.85862601 -7.83703762 -21.20271964 -23.32963652 47.58644884 11.47287667 -51.57984194 -1.71467593] y [0.99812398 1.00030654 0.99941409 0.99353321 0.98939415 0.98219534 0.96353255 0.92929561 0.85957203 0.74063991] tolerância 94.34714294055459 ===================================================== d [ 31.12593172 15.57806117 -40.97997248 -5.76801027 -19.9684666 -23.95532969 47.34666042 9.72683413 -48.93689417 -2.08311511] y [0.99819525 1.00034735 0.99931951 0.99351461 0.98934383 0.98213997 0.96364548 0.92932284 0.85944963 0.74063584] tolerância 92.9096427487755 ===================================================== d [ 32.14068695 13.9279289 -41.95630669 -3.77410927 -18.72601225 -24.47844514 46.96826318 7.98407751 -46.21452679 -2.48346021] y [0.99826912 1.00038432 0.99922226 0.99350092 0.98929645 0.98208313 0.96375783 0.92934592 0.8593335 0.7406309 ] tolerância 91.45030150394398 ===================================================== d [ 33.08098537 12.25883822 -42.79418059 -1.86099296 -17.48157081 -24.90272433 46.46130517 6.2534567 -43.43213371 -2.9127107 ] y [0.99834515 1.00041727 0.99912301 0.99349199 0.98925215 0.98202522 0.96386894 0.92936481 0.85922418 0.74062502] tolerância 89.98766366882552 ===================================================== d [ 3.39448068e+01 1.05763773e+01 -4.34939381e+01 -3.27954604e-02 -1.62378285e+01 -2.52282696e+01 4.58281805e+01 4.54161254e+00 -4.06003671e+01 -3.36782656e+00] y [0.99842365 1.00044636 0.99902146 0.99348758 0.98921067 0.98196613 0.96397919 0.92937964 0.85912111 0.74061811] tolerância 88.5242706260893 ===================================================== d [ 34.73637865 8.88795796 -44.0642939 1.70711189 -15.00014458 -25.46021316 45.07991157 2.85563895 -37.73684837 -3.84623815] y [0.9985042 1.00047146 0.99891825 0.9934875 0.98917213 0.98190626 0.96408794 0.92939042 0.85902477 0.74061012] tolerância 87.07830494906102 ===================================================== d [ 35.46009274 7.19988861 -44.51425693 3.35699631 -13.77281506 -25.60397721 44.22696519 1.20130852 -34.85686141 -4.34548782] y [0.99858637 1.00049248 0.99881401 0.99349154 0.98913665 0.98184603 0.96419458 0.92939718 0.8589355 0.74060102] tolerância 85.66592551467885 ===================================================== d [ 36.11483338 5.51618682 -44.84556913 4.91547197 -12.557122 -25.66071009 43.2717863 -0.41692747 -31.96795181 -4.86289525] y [0.99867052 1.00050957 0.99870838 0.9934995 0.98910397 0.98178528 0.96429953 0.92940003 0.85885278 0.74059071] tolerância 84.28741785797162 ===================================================== d [ 36.70596679 3.84176527 -45.06826467 6.3825869 -11.3564001 -25.6364528 42.22475991 -1.99502465 -29.08299234 -5.396583 ] y [0.99875622 1.00052266 0.99860196 0.99351117 0.98907417 0.98172438 0.96440221 0.92939904 0.85877692 0.74057917] tolerância 82.95795691026599 ===================================================== d [ 37.23848046 2.18047605 -45.19184723 7.7595504 
-10.17295717 -25.53700835 41.0950825 -3.53007237 -26.21236336 -5.94480845] y [0.99884305 1.00053174 0.99849535 0.99352627 0.9890473 0.98166374 0.9645021 0.92939432 0.85870813 0.7405664 ] tolerância 81.69024660720868 ===================================================== d [ 37.71148798 0.53486173 -45.21827872 9.04682021 -9.00685372 -25.36376398 39.88440075 -5.01946075 -23.36048345 -6.50537054] y [0.99893142 1.00053692 0.99838811 0.99354468 0.98902316 0.98160314 0.96459962 0.92938594 0.85864592 0.74055229] tolerância 80.48239849755974 ===================================================== d [ 38.13034099 -1.0924757 -45.15716271 10.24669513 -7.85955681 -25.12251772 38.60115003 -6.46164676 -20.53539739 -7.07711337] y [0.9990209 1.00053819 0.9982808 0.99356615 0.98900179 0.98154295 0.96469427 0.92937403 0.85859049 0.74053686] tolerância 79.3459752239586 ===================================================== d [ 38.49790758 -2.69971364 -45.01492344 11.36147887 -6.73138126 -24.81724772 37.25045875 -7.85552018 -17.74210721 -7.65873218] y [0.99911139 1.0005356 0.99817364 0.99359046 0.98898314 0.98148333 0.96478587 0.9293587 0.85854176 0.74052006] tolerância 78.28636902819814 ===================================================== d [ 38.81522951 -4.28540317 -44.79562355 12.39313539 -5.62200927 -24.45053558 35.83512096 -9.20005505 -14.98383004 -8.24886048] y [0.99920303 1.00052917 0.99806649 0.99361751 0.98896712 0.98142426 0.96487454 0.92934 0.85849952 0.74050183] tolerância 77.30469496619853 ===================================================== d [ 39.08815294 -5.84883909 -44.50896681 13.34568254 -4.53159912 -24.02818168 34.36242782 -10.495792 -12.26502142 -8.84714071] y [0.99929514 1.000519 0.99796019 0.99364692 0.98895377 0.98136624 0.96495958 0.92931817 0.85846397 0.74048226] tolerância 76.41189461465257 ===================================================== d [ 39.31512372 -7.38901815 -44.15578063 14.22073871 -3.458965 -23.55094513 32.83209319 -11.7418733 -9.58642369 -9.45204552] y [0.99938819 1.00050508 0.99785424 0.99367869 0.98894299 0.98130904 0.96504137 0.92929318 0.85843477 0.7404612 ] tolerância 75.60363076969477 ===================================================== d [ 39.50072305 -8.90598447 -43.74395566 15.02240416 -2.40350496 -23.02353049 31.24934592 -12.939418 -6.95027657 -10.06341907] y [0.99948148 1.00048754 0.99774945 0.99371243 0.98893478 0.98125315 0.96511929 0.92926532 0.85841202 0.74043877] tolerância 74.88804085104135 ===================================================== d [ 39.64292315 -10.39890259 -43.27369022 15.7522361 -1.36386933 -22.44634417 29.61328942 -14.0877698 -4.35667422 -10.67987025] y [0.99957551 1.00046634 0.99764532 0.99374819 0.98892906 0.98119834 0.96519367 0.92923452 0.85839548 0.74041481] tolerância 74.26031680752284 ===================================================== d [ 39.74385159 -11.86788426 -42.74983216 16.41350466 -0.33896058 -21.82231726 27.92644612 -15.18777688 -1.80632745 -11.30107988] y [0.99966988 1.00044159 0.99754231 0.99378569 0.98892581 0.98114491 0.96526417 0.92920098 0.85838511 0.74038939] tolerância 73.7240280391994 ===================================================== d [ 39.80256023 -13.31244241 -42.17373911 17.00818191 0.67252167 -21.15246193 26.1887083 -16.23933146 0.70069983 -11.92612465] y [0.99976479 1.00041325 0.99744023 0.99382488 0.988925 0.9810928 0.96533085 0.92916471 0.85838079 0.7403624 ] tolerância 73.27710512324138 ===================================================== d [ 39.82050247 -14.73288377 -41.54938878 17.53950387 
1.67191191 -20.43925265 24.40180981 -17.24349687 3.16452778 -12.55474167] y [0.99985953 1.00038156 0.99733984 0.99386537 0.9889266 0.98104245 0.96539319 0.92912606 0.85838246 0.74033401] tolerância 72.92253975790689 ===================================================== d [ 39.79463284 -16.12808907 -40.87589334 18.00851752 2.66046823 -19.68256949 22.56422748 -18.19945772 5.58518463 -13.18555359] y [0.99995462 1.00034638 0.99724062 0.99390726 0.98893059 0.98099364 0.96545146 0.92908488 0.85839002 0.74030403] tolerância 72.65516352064755 ===================================================== d [ 39.72436571 -17.49772031 -40.15502315 18.41740744 3.63944517 -18.88371351 20.67624793 -19.10749171 7.96274304 -13.8179061 ] y [1.00004965 1.00030787 0.99714301 0.99395026 0.98893695 0.98094664 0.96550534 0.92904142 0.85840335 0.74027255] tolerância 72.47486253502846 ===================================================== d [ 39.60744639 -18.84074854 -39.38684708 18.76757476 4.60994983 -18.04318023 18.73726106 -19.96714905 10.297012 -14.45064858] y [1.00014451 1.00026608 0.99704713 0.99399424 0.98894564 0.98090155 0.96555472 0.92899579 0.85842237 0.74023955] tolerância 72.37885661087805 ===================================================== d [ 39.44114027 -20.15579234 -38.57102457 19.06013814 5.57292296 -17.16133179 16.74662988 -20.77770131 12.5874579 -15.0824553 ] y [1.00023909 1.00022109 0.99695307 0.99403905 0.98895665 0.98085846 0.96559946 0.92894811 0.85844696 0.74020504] tolerância 72.36385928182682 ===================================================== d [ 39.222271 -21.44107703 -37.70687506 19.29594815 6.52910389 -16.2384566 14.70379959 -21.53812721 14.83311791 -15.71180253] y [1.00033327 1.00017296 0.99686097 0.99408457 0.98896995 0.98081748 0.96563945 0.9288985 0.85847701 0.74016903] tolerância 72.42609895422513 ===================================================== d [ 38.94739083 -22.69446911 -36.79358493 19.47558205 7.47903089 -15.27482696 12.60835242 -22.2471133 17.0324993 -16.33705715] y [1.00042722 1.0001216 0.99677065 0.99413079 0.98898559 0.98077859 0.96567467 0.92884691 0.85851255 0.74013139] tolerância 72.5614958532597 ===================================================== d [ 38.61299036 -23.91357724 -35.83039871 19.59964175 8.42304574 -14.27092303 10.46033064 -22.90329431 19.18378546 -16.95642534] y [1.00052051 1.00006724 0.99668251 0.99417744 0.98900351 0.980742 0.96570487 0.92879362 0.85855334 0.74009226] tolerância 72.76613199436729 ===================================================== d [ 38.21484981 -25.09532875 -34.81603216 19.66833738 9.3611441 -13.22715367 8.26000169 -23.50479784 21.28441108 -17.56770578] y [1.000613 1.00000996 0.99659669 0.99422439 0.98902368 0.98070781 0.96572993 0.92873876 0.8585993 0.74005164] tolerância 73.03486882890759 ===================================================== d [ 37.74854601 -26.23619251 -33.74918146 19.68173995 10.29301852 -12.14410974 6.00814557 -24.04949898 23.33117375 -18.16845164] y [1.00070454 0.99994985 0.99651329 0.9942715 0.98904611 0.98067613 0.96574971 0.92868245 0.85865028 0.74000956] tolerância 73.36216197913888 ===================================================== d [ 37.2095348 -27.33219311 -32.6286103 19.63982986 11.21804548 -11.02262724 3.70615189 -24.53505683 25.32022477 -18.75597316] y [1.00079496 0.999887 0.99643245 0.99431864 0.98907076 0.98064704 0.96576411 0.92862485 0.85870617 0.73996604] tolerância 73.74208520248528 ===================================================== d [ 36.59324087 -28.37893771 -31.45323677 
19.54254949 12.13527801 -9.86384703 1.35610827 -24.9589602 27.24707784 -19.32734703] y [1.00088409 0.99982153 0.99635429 0.99436569 0.98909764 0.98062064 0.96577298 0.92856608 0.85876682 0.73992112] tolerância 74.16835912516608 ===================================================== d [ 35.89515417 -29.37165607 -30.22221872 19.38985874 13.04344433 -8.66926977 -1.03912249 -25.31858201 29.10663627 -19.87943274] y [1.00097175 0.99975355 0.99627895 0.9944125 0.9891267 0.98059701 0.96577623 0.92850629 0.85883208 0.73987482] tolerância 74.63438596880839 ===================================================== d [ 35.11154183 -30.30576429 -28.93561057 19.18209102 13.9412211 -7.44092703 -3.47595615 -25.61163971 30.89368672 -20.40929783] y [1.001058 0.99968298 0.99620633 0.99445909 0.98915804 0.98057618 0.96577374 0.92844545 0.85890202 0.73982705] tolerância 75.13457667010141 ===================================================== d [ 34.23741985 -31.17517864 -27.59245157 18.91895486 14.82635193 -6.18096315 -5.94972237 -25.83487104 32.60141615 -20.91289392] y [1.0011421 0.99961039 0.99613702 0.99450504 0.98919144 0.98055836 0.96576541 0.92838411 0.85897602 0.73977817] tolerância 75.65995744408289 ===================================================== d [ 33.26937578 -31.97459443 -26.19332653 18.60095778 15.6967505 -4.89228434 -8.45492607 -25.98589758 34.22347158 -21.38675516] y [1.00122411 0.99953571 0.99607093 0.99455036 0.98922695 0.98054355 0.96575116 0.92832222 0.85905412 0.73972807] tolerância 76.20384774655213 ===================================================== d [ 32.20480477 -32.6989899 -24.73966874 18.22903294 16.55031556 -3.57830497 -10.98539793 -26.06276973 35.75355052 -21.8276389 ] y [1.00130405 0.99945888 0.99600799 0.99459505 0.98926467 0.9805318 0.96573084 0.92825978 0.85913635 0.73967668] tolerância 76.76033208943713 ===================================================== d [ 31.04011163 -33.34181688 -23.23208646 17.80353711 17.3838656 -2.24272442 -13.53349627 -26.06259408 37.18361805 -22.23113888] y [1.0013812 0.99938056 0.99594873 0.99463872 0.98930431 0.98052322 0.96570453 0.92819735 0.85922199 0.7396244 ] tolerância 77.31978383516181 ===================================================== d [ 29.77422564 -33.8984993 -21.67341493 17.32621007 18.19506586 -0.88997628 -16.09143803 -25.98424914 38.5074735 -22.59422637] y [1.00145578 0.99930044 0.9958929 0.99468149 0.98934608 0.98051783 0.96567201 0.92813473 0.85931134 0.73957098] tolerância 77.8770818574686 ===================================================== d [ 28.40507827 -34.36278656 -20.06561424 16.79813362 18.98034515 0.47517794 -18.64974741 -25.82556361 39.71697637 -22.9125869 ] y [1.0015271 0.99921924 0.99584099 0.994723 0.98938967 0.9805157 0.96563346 0.92807249 0.85940358 0.73951686] tolerância 78.42278177013624 ===================================================== d [ 26.93262856 -34.7300373 -18.41228484 16.22148406 19.73675416 1.84742181 -21.19879069 -25.58582173 40.80555603 -23.18295663] y [1.00159514 0.99913693 0.99579292 0.99476323 0.98943513 0.98051684 0.96558879 0.92801063 0.85949871 0.73946198] tolerância 78.95067488358232 ===================================================== d [ 25.35742085 -34.99571528 -16.7174391 15.59867976 20.46112615 3.22105137 -23.72816852 -25.26458031 41.76661348 -23.40205446] y [1.00165965 0.99905374 0.99574882 0.99480209 0.98948241 0.98052127 0.96553801 0.92794934 0.85959646 0.73940644] tolerância 79.45410291897959 ===================================================== d [ 23.68125827 -35.15634071 
-14.98602574 14.93281908 21.15068061 4.59005216 -26.22741362 -24.86234491 42.59461484 -23.56725743] y [1.00172058 0.99896965 0.99570865 0.99483957 0.98953157 0.98052901 0.965481 0.92788863 0.85969681 0.73935021] tolerância 79.92803302834669 ===================================================== d [ 21.90606293 -35.20782443 -13.2228042 14.22680873 21.80184013 5.94804604 -28.68470663 -24.37935334 43.28323736 -23.67532984] y [1.00177731 0.99888544 0.99567275 0.99487534 0.98958224 0.98054 0.96541817 0.92782908 0.85979884 0.73929376] tolerância 80.36501577142492 ===================================================== d [ 20.0353963 -35.14777424 -11.43362017 13.4844327 22.41178649 7.28851392 -31.08872946 -23.81719971 43.8280249 -23.72410783] y [1.00182978 0.9988011 0.99564108 0.99490942 0.98963446 0.98055425 0.96534946 0.92777068 0.85990252 0.73923705] tolerância 80.76043897028124 ===================================================== d [ 18.07359079 -34.97437185 -9.62466289 12.70975789 22.97775375 8.60480097 -33.42796952 -23.17798406 44.22511939 -23.71169628] y [1.00187777 0.99871691 0.99561369 0.99494172 0.98968814 0.98057171 0.965275 0.92771363 0.86000751 0.73918022] tolerância 81.10984549049928 ===================================================== d [ 16.02580153 -34.68639446 -7.80234428 11.90705439 23.49695889 9.89018759 -35.69071554 -22.46429122 44.47132197 -23.63643545] y [1.00192093 0.9986334 0.99559071 0.99497207 0.98974301 0.98059225 0.96519517 0.92765828 0.86011311 0.7391236 ] tolerância 81.40884963239282 ===================================================== d [ 13.89818616 -34.28404443 -5.97367175 11.08119624 23.96740495 11.13809315 -37.86618107 -21.67973514 44.56508598 -23.49759941] y [1.00195932 0.99855031 0.99557202 0.99500059 0.9897993 0.98061595 0.96510968 0.92760447 0.86021964 0.73906698] tolerância 81.65518486820562 ===================================================== d [ 11.69755459 -33.76818891 -4.14565453 10.23715346 24.38718569 12.34208846 -39.94377191 -20.82836438 44.50567787 -23.29475377] y [1.00199261 0.99846819 0.99555771 0.99502713 0.98985671 0.98064263 0.96501898 0.92755254 0.86032639 0.7390107 ] tolerância 81.84679404063333 ===================================================== d [ 9.43142111 -33.14055716 -2.32528746 9.37999957 24.75463244 13.49602229 -41.91339549 -19.91474428 44.29343903 -23.02789992] y [1.00202054 0.99838755 0.99554781 0.99505158 0.98991494 0.9806721 0.96492359 0.92750281 0.86043266 0.73895507] tolerância 81.9822950928566 ===================================================== d [ 7.10776003 -32.40367749 -0.51961272 8.51497353 25.06846741 14.59400755 -43.76564867 -18.94392158 43.92968356 -22.6975221 ] y [1.00204306 0.99830842 0.99554226 0.99507398 0.98997405 0.98070432 0.96482351 0.92745525 0.86053843 0.73890008] tolerância 82.06098482082011 ===================================================== d [ 4.73501792 -31.56107844 1.26444938 7.64741601 25.32791724 15.6306329 -45.49218558 -17.92147906 43.41702145 -22.30469833] y [1.00206004 0.99823104 0.99554102 0.99509431 0.99003392 0.98073917 0.964719 0.92741002 0.86064333 0.73884588] tolerância 82.08336997880089 ===================================================== d [ 2.32181566 -30.61692424 3.02015053 6.78269852 25.5325747 16.60085825 -47.08547752 -16.85330884 42.75888515 -21.85089562] y [1.00207138 0.99815544 0.99554405 0.99511263 0.99009458 0.98077661 0.96461003 0.92736709 0.86074733 0.73879246] tolerância 82.05043669862306 ===================================================== d [ -0.12283566 -29.57679851 
4.7412822 5.92610359 25.68284134 17.50065591 -48.5400253 -15.74589912 41.96069802 -21.33844183] y [1.00207692 0.99808233 0.99555126 0.99512883 0.99015555 0.98081626 0.96449759 0.92732684 0.86084943 0.73874028] tolerância 81.96571556965858 ===================================================== d [ -2.59000896 -28.44621087 6.42203054 5.08258962 25.77894714 18.32624964 -49.85045144 -14.60551918 41.02780742 -20.76955883] y [1.00207663 0.99801192 0.99556255 0.99514293 0.99021669 0.98085792 0.96438205 0.92728936 0.86094932 0.73868948] tolerância 81.83165677068963 ===================================================== d [ -5.07095216 -27.23061073 8.05676521 4.25700869 25.82109856 19.07395748 -51.0114854 -13.4383338 39.96544698 -20.14642528] y [1.00207045 0.99794399 0.99557788 0.99515507 0.99027825 0.98090168 0.96426301 0.92725448 0.86104729 0.73863989] tolerância 81.64971005672521 ===================================================== d [ -7.55708506 -25.93711149 9.64070273 3.45410353 25.81084529 19.74153666 -52.02103545 -12.25112564 38.7812959 -19.47236541] y [1.00205834 0.99787897 0.99559712 0.99516523 0.99033991 0.98094722 0.9641412 0.92722239 0.86114273 0.73859178] tolerância 81.42533639851109 ===================================================== d [-10.04014293 -24.57264089 11.16956831 2.6782202 25.74959813 20.32707056 -52.87728239 -11.0503384 37.48285286 -18.75055892] y [1.00204029 0.99781703 0.99562014 0.99517348 0.99040154 0.98099436 0.96401698 0.92719314 0.86123533 0.73854528] tolerância 81.16309236047834 ===================================================== d [-12.51232038 -23.1446884 12.63992491 1.93330319 25.63931915 20.82961293 -53.58020781 -9.84236811 36.07857134 -17.98459427] y [1.00201639 0.99775854 0.99564673 0.99517986 0.99046284 0.98104275 0.9638911 0.92716683 0.86132456 0.73850065] tolerância 80.86907571077171 ===================================================== d [-14.9658987 -21.65958811 14.04822349 1.22303581 25.48098416 21.24758039 -54.12798417 -8.63295686 34.57520139 -17.17722636] y [1.00198651 0.99770327 0.99567691 0.99518448 0.99052406 0.98109249 0.96376316 0.92714333 0.86141071 0.7384577 ] tolerância 80.54547024984706 ===================================================== d [-17.39440567 -20.12551209 15.39259409 0.55067338 25.27757616 21.58167855 -54.52380295 -7.42824498 32.9824997 -16.33263384] y [1.00195089 0.99765171 0.99571035 0.99518739 0.99058472 0.98114307 0.96363431 0.92712278 0.86149302 0.73841681] tolerância 80.2009077973955 ===================================================== d [-19.79095297 -18.54875 16.67054401 -0.08079201 25.03024929 21.83122971 -54.76720271 -6.23344928 31.3073499 -15.45358704] y [1.00190935 0.99760366 0.99574711 0.9951887 0.99064508 0.98119461 0.96350411 0.92710504 0.86157177 0.73837781] tolerância 79.83754363604736 ===================================================== d [-22.15044652 -16.93718081 17.88152596 -0.66891694 24.74223953 21.99797636 -54.8630323 -5.05396044 29.55942423 -14.54418534] y [1.00186224 0.9975595 0.99578679 0.99518851 0.99070466 0.98124657 0.96337374 0.9270902 0.8616463 0.73834102] tolerância 79.46435612402989 ===================================================== d [-24.46687999 -15.29680724 19.02399889 -1.21139149 24.41474201 22.08198037 -54.81185811 -3.89435895 27.74539866 -13.60707036] y [1.00180935 0.99751906 0.99582949 0.99518691 0.99076374 0.9812991 0.96324273 0.92707814 0.86171689 0.73830629] tolerância 79.08324315195476 ===================================================== d [-26.73651557 -13.63490199 
20.09855379 -1.70654897 24.05101553 22.0857474 -54.61961136 -2.75916924 25.87439717 -12.64605677] y [1.0017511 0.99748264 0.99587478 0.99518403 0.99082186 0.98135167 0.96311225 0.92706887 0.86178293 0.7382739 ] tolerância 78.70304770219913 ===================================================== d [-28.95417311 -11.95702398 21.10443975 -2.15270688 23.65217857 22.00989565 -54.28753669 -1.65229869 23.9526429 -11.66355784] y [1.00168726 0.99745008 0.99592277 0.99517995 0.99087929 0.98140441 0.96298183 0.92706228 0.86184472 0.7382437 ] tolerância 78.3252293927199 ===================================================== d [-31.11687616 -10.26946665 22.04272461 -2.54871752 23.22096434 21.85694473 -53.82110103 -0.57744835 21.9879748 -10.66277039] y [1.00161812 0.99742153 0.99597317 0.99517481 0.99093577 0.98145697 0.96285219 0.92705833 0.86190192 0.73821585] tolerância 77.95671160690242 ===================================================== d [-33.22182744 -8.57790124 22.9146629 -2.89385676 22.75964312 21.62941111 -53.22530505 0.46224289 19.98742326 -9.64649524] y [1.00154405 0.99739709 0.99602564 0.99516875 0.99099105 0.981509 0.96272408 0.92705696 0.86195426 0.73819047] tolerância 77.60320802665184 ===================================================== d [-35.2648048 -6.88711803 23.72032902 -3.18722248 22.26906481 21.32843019 -52.50183956 1.46381272 17.95630775 -8.61672302] y [1.00146472 0.9973766 0.99608036 0.99516184 0.9910454 0.98156064 0.96259698 0.92705806 0.86200199 0.73816744] tolerância 77.26530423685011 ===================================================== d [-37.24400993 -5.20224012 24.46158733 -3.42845964 21.75148772 20.95682542 -51.65615885 2.42469576 15.90103326 -7.575976 ] y [1.00138051 0.99736016 0.996137 0.99515422 0.99109857 0.98161157 0.96247161 0.92706156 0.86204486 0.73814686] tolerância 76.94865226855069 ===================================================== d [-39.15700278 -3.52780186 25.13983453 -3.61734231 21.20837379 20.51686265 -50.69214759 3.34263574 13.82700076 -6.52630309] y [1.00129157 0.99734773 0.99619541 0.99514604 0.99115052 0.98166162 0.96234826 0.92706735 0.86208283 0.73812877] tolerância 76.65634411614336 ===================================================== d [-41.0014622 -1.86805337 25.75653873 -3.75381736 20.64100673 20.01081045 -49.61352124 4.21563816 11.7392228 -5.46958031] y [1.00119807 0.99733931 0.99625544 0.9951374 0.99120116 0.98171061 0.96222721 0.92707533 0.86211585 0.73811318] tolerância 76.39098381003582 ===================================================== d [-42.77511177 -0.22699521 26.31319112 -3.83798247 20.05049155 19.44092106 -48.42379587 5.04193771 9.64235858 -4.40752949] y [1.00110016 0.99733485 0.99631695 0.99512844 0.99125045 0.98175839 0.96210874 0.92708539 0.86214388 0.73810012] tolerância 76.15467162782959 ===================================================== d [-44.47564885 1.39158256 26.8112634 -3.87006758 19.43775911 18.80941914 -47.12627318 5.81996619 7.54075746 -3.34174079] y [1.00099802 0.99733431 0.99637978 0.99511927 0.99129833 0.98180482 0.96199311 0.92709743 0.86216691 0.7380896 ] tolerância 75.94899422306209 ===================================================== d [-46.10067875 2.98405863 27.25217131 -3.85042016 18.8035752 18.11849837 -45.72404039 6.5483221 5.43850893 -2.27369834] y [1.00089181 0.99733763 0.99644381 0.99511003 0.99134474 0.98184973 0.96188057 0.92711133 0.86218491 0.73808162] tolerância 75.77502004296406 ===================================================== d [-47.64732865 4.54685629 27.6370155 -3.779326 
18.14844059 17.37009919 -44.21955064 7.2255775 3.33947223 -1.20478854] y [1.00078138 0.99734478 0.99650908 0.99510081 0.99138978 0.98189313 0.96177105 0.92712702 0.86219794 0.73807617] tolerância 75.6326465289174 ===================================================== d [-49.11405576 6.07672817 27.96775641 -3.65758001 17.47328368 16.56692001 -42.61677002 7.85088333 1.24747207 -0.13641062] y [1.00066761 0.99735564 0.99657508 0.99509178 0.99143312 0.98193461 0.96166545 0.92714427 0.86220592 0.7380733 ] tolerância 75.5239638793223 ===================================================== d [-50.49678058 7.57015138 28.24482917 -3.48560801 16.77805914 15.71060845 -40.91712952 8.42290561 -0.83392941 0.9301436 ] y [1.00054996 0.99737019 0.99664207 0.99508302 0.99147498 0.98197429 0.96156337 0.92716308 0.8622089 0.73807297] tolerância 75.44664205619803 ===================================================== d [-51.79308769 9.02388338 28.46967848 -3.26422664 16.06332764 14.80361137 -39.1238285 8.94080991 -2.90112205 1.99353966] y [1.000429 0.99738832 0.99670973 0.99507467 0.99151517 0.98201193 0.96146536 0.92718325 0.86220691 0.7380752 ] tolerância 75.40101010613559 ===================================================== d [-52.99943713 10.43457411 28.64312913 -2.99429225 15.3292697 13.84812226 -37.23927222 9.40370212 -4.95052874 3.0524249 ] y [1.00030494 0.99740994 0.99677792 0.99506685 0.99155364 0.98204739 0.96137164 0.92720467 0.86219996 0.73807997] tolerância 75.3856110036235 ===================================================== d [-54.11198057 11.79881787 28.76586664 -2.67677623 14.57602678 12.84641189 -35.2659031 9.81074825 -6.97849628 4.1053809 ] y [1.00017799 0.99743493 0.99684653 0.99505968 0.99159036 0.98208056 0.96128244 0.92722719 0.8621881 0.73808728] tolerância 75.39852703700883 ===================================================== d [-55.12656772 13.11313356 28.83844652 -2.31277814 13.80373033 11.80086127 -33.20627378 10.16117484 -8.98124964 5.15089913] y [1.00004837 0.9974632 0.99691544 0.99505327 0.99162528 0.98211133 0.96119797 0.9272507 0.86217138 0.73809712] tolerância 75.43740238401759 ===================================================== d [-56.03876616 14.37395263 28.86130888 -1.90354126 13.01253149 10.71399578 -31.06312383 10.45427483 -10.95485325 6.1873599 ] y [0.99991632 0.99749461 0.99698452 0.99504773 0.99165834 0.9821396 0.96111843 0.92727503 0.86214987 0.73810946] tolerância 75.49946943138386 ===================================================== d [-56.84397692 15.57754547 28.83485758 -1.45034706 12.20266801 9.58847685 -28.83943411 10.68931592 -12.89516037 7.21302847] y [0.99978167 0.99752915 0.99705387 0.99504315 0.99168961 0.98216534 0.96104379 0.92730015 0.86212355 0.73812432] tolerância 75.58162159906983 ===================================================== d [-57.53775174 16.72038643 28.75956362 -0.95491133 11.37448579 8.4273387 -26.53881609 10.86596238 -14.79798383 8.22607499] y [0.99964551 0.99756646 0.99712294 0.99503968 0.99171884 0.98218831 0.96097471 0.92732576 0.86209266 0.7381416 ] tolerância 75.68100191770843 ===================================================== d [-58.11469739 17.79847224 28.63549287 -0.4188796 10.52829846 7.233613 -24.16475624 10.98360717 -16.65865936 9.22443019] y [0.99950726 0.99760664 0.99719204 0.99503739 0.99174617 0.98220856 0.96091094 0.92735187 0.8620571 0.73816137] tolerância 75.7932511440129 ===================================================== d [-58.57002041 18.80813584 28.4629724 0.15563724 9.66465585 6.01079352 -21.7216583 
11.04213475 -18.47254484 10.20595511] y [0.99936805 0.99764927 0.99726063 0.99503638 0.99177139 0.98222589 0.96085306 0.92737818 0.8620172 0.73818346] tolerância 75.91474895431058 ===================================================== d [-58.89835718 19.74531857 28.24213274 0.76653481 8.78419268 4.76250686 -19.21412286 11.04125696 -20.23453873 11.16828132] y [0.99922732 0.99769446 0.99732902 0.99503676 0.99179461 0.98224033 0.96080086 0.92740471 0.86197281 0.73820799] tolerância 76.04079281742607 ===================================================== d [-59.09514131 20.60630961 27.97345428 1.41135325 7.88781963 3.49279092 -16.64764886 10.9811191 -21.93961261 12.10901663] y [0.9990858 0.99774191 0.99739688 0.9950386 0.99181572 0.98225177 0.9607547 0.92743124 0.86192419 0.73823482] tolerância 76.16746776614126 ===================================================== d [-59.1555793 21.38733658 27.65729009 2.08738301 6.9765691 2.20594211 -14.02819468 10.86200376 -23.58245479 13.02557219] y [0.9989438 0.99779142 0.9974641 0.99504199 0.99183467 0.98226016 0.96071469 0.92745763 0.86187148 0.73826392] tolerância 76.29019180094713 ===================================================== d [-59.07518745 22.0847527 27.29411131 2.79165641 6.05168707 0.90656847 -11.36238206 10.68444129 -25.15763861 13.91525095] y [0.99880166 0.99784281 0.99753055 0.995047 0.99185143 0.98226547 0.96068099 0.92748373 0.86181481 0.73829521] tolerância 76.40433702366238 ===================================================== d [-58.84990308 22.69509807 26.88453956 3.5209475 5.11464252 -0.4004196 -8.65750552 10.44923779 -26.65967826 14.77527377] y [0.99865972 0.99789587 0.99759614 0.99505371 0.99186597 0.98226764 0.96065369 0.9275094 0.86175436 0.73832865] tolerância 76.50529487393982 ===================================================== d [-58.47619633 23.21516497 26.42937682 4.27177774 4.1671317 -1.70982797 -5.92152437 10.15750057 -28.08309403 15.60281068] y [0.99851831 0.99795041 0.99766074 0.99506217 0.99187826 0.98226668 0.96063288 0.92753451 0.8616903 0.73836415] tolerância 76.58854420932813 ===================================================== d [-57.95117942 23.6420653 25.92963185 5.04042791 3.21107674 -3.01620706 -3.1630363 9.81065995 -29.42248545 16.39501694] y [0.9983778 0.99800619 0.99772424 0.99507244 0.99188828 0.98226257 0.96061865 0.92755891 0.86162283 0.73840164] tolerância 76.64972123786042 ===================================================== d [-57.27271015 23.97329828 25.38654213 5.82295668 2.24861851 -4.31388969 -0.39123205 9.41048629 -30.67261068 17.14907295] y [0.99823856 0.998063 0.99778654 0.99508455 0.99189599 0.98225533 0.96061105 0.92758249 0.86155213 0.73844104] tolerância 76.68469010840225 ===================================================== d [-56.43948619 24.20681622 24.8015907 6.61522571 1.28210352 -5.59703791 2.38416869 8.9591012 -31.82847007 17.86222717] y [0.99810094 0.9981206 0.99784754 0.99509854 0.9919014 0.98224496 0.96061011 0.9276051 0.86147843 0.73848224] tolerância 76.68961278506887 ===================================================== d [-55.45112718 24.34108622 24.17651691 7.41293121 0.31406504 -6.85969818 5.15299938 8.45898209 -32.88539146 18.53184096] y [0.99796533 0.99817876 0.99790714 0.99511444 0.99190448 0.98223151 0.96061584 0.92762662 0.86140195 0.73852516] tolerância 76.66101669160554 ===================================================== d [-54.30824133 24.37514562 23.51332081 8.21164139 -0.65280138 -8.09586361 7.90474548 7.91295952 -33.83911479 19.1554341 ] y [0.99783209 
0.99823725 0.99796523 0.99513225 0.99190523 0.98221503 0.96062822 0.92764695 0.86132293 0.73856969] tolerância 76.59585852149155 ===================================================== d [-53.01208585 24.30837936 22.81418555 9.00684757 -1.6156623 -9.29946302 10.62860251 7.32408434 -34.68560588 19.73061672] y [0.99770119 0.998296 0.9980219 0.99515204 0.99190366 0.98219552 0.96064728 0.92766602 0.86124137 0.73861586] tolerância 76.49102686066875 ===================================================== d [-51.56660142 24.14175321 22.08205466 9.79412915 -2.57162153 -10.46481788 13.3139364 6.69609336 -35.42248565 20.25578447] y [0.99757381 0.99835441 0.99807672 0.99517368 0.99189978 0.98217317 0.96067282 0.92768362 0.86115803 0.73866327] tolerância 76.34627854168323 ===================================================== d [-49.9746963 23.87586312 21.31928952 10.56881116 -3.51768 -11.58605232 15.94978527 6.03265524 -36.04664666 20.72888425] y [0.99744991 0.99841242 0.99812978 0.99519721 0.9918936 0.98214803 0.96070481 0.92769971 0.86107292 0.73871194] tolerância 76.15894971986091 ===================================================== d [-48.2412036 23.51238743 20.52883749 11.32641449 -4.45082824 -12.65765463 18.52547806 5.33783053 -36.55621533 21.14849094] y [0.99732983 0.99846979 0.99818101 0.99522261 0.99188514 0.98212019 0.96074313 0.9277142 0.8609863 0.73876175] tolerância 75.92821830620065 ===================================================== d [-46.37197102 23.05369478 19.71382609 12.06257701 -5.36807784 -13.67441932 21.0306488 4.6159049 -36.95005309 21.51352995] y [0.99721391 0.99852628 0.99823033 0.99524982 0.99187445 0.98208977 0.96078764 0.92772703 0.86089847 0.73881256] tolerância 75.65389882045655 ===================================================== d [-44.3737981 22.50282833 18.87752835 12.77310514 -6.26650368 -14.6315144 23.45537461 3.87134351 -37.22778517 21.82329993] y [0.99710249 0.99858168 0.9982777 0.99527881 0.99186155 0.98205692 0.96083818 0.92773812 0.86080968 0.73886426] tolerância 75.33645280771135 ===================================================== d [-42.25435219 21.86347612 18.02332395 13.45402283 -7.1432854 -15.52454229 25.79030713 3.10874082 -37.38981386 22.07748809] y [0.99699587 0.99863575 0.99832306 0.9953095 0.99184649 0.98202176 0.96089454 0.92774742 0.86072023 0.73891669] tolerância 74.9769872426426 ===================================================== d [-40.02206524 21.13992806 17.15465687 14.10161657 -7.99574683 -16.34959242 28.02679315 2.33276744 -37.43731556 22.27617745] y [0.99689434 0.99868828 0.99836637 0.99534183 0.99182933 0.98198446 0.96095651 0.92775489 0.86063039 0.73896974] tolerância 74.57724091561504 ===================================================== d [-37.68687145 20.33754048 16.27525283 14.7127484 -8.821556 -17.10368876 30.1576087 1.54818652 -37.37302826 22.42028534] y [0.99679847 0.99873892 0.99840746 0.99537561 0.99181018 0.98194529 0.96102364 0.92776048 0.86054071 0.7390231 ] tolerância 74.14115828204186 ===================================================== d [-35.25633282 19.46048509 15.38778673 15.28363349 -9.61801138 -17.78310053 32.17428508 0.75952725 -37.19766756 22.50956187] y [0.99670792 0.99878778 0.99844656 0.99541096 0.99178898 0.9819042 0.9610961 0.9277642 0.86045091 0.73907697] tolerância 73.66780894018605 ===================================================== d [-3.27416252e+01 1.85151287e+01 1.44961472e+01 1.58119850e+01 -1.03833125e+01 -1.83860870e+01 3.40716755e+01 -2.85679780e-02 -3.69156929e+01 2.25459161e+01] y [0.9966232 
0.99883454 0.99848354 0.99544768 0.99176587 0.98186147 0.96117341 0.92776603 0.86036153 0.73913106] tolerância 73.16289455016287 ===================================================== d [-30.15293577 17.50752361 13.60358971 16.29546736 -11.11567731 -18.91100061 35.84479313 -0.81166438 -36.53103688 22.5309264 ] y [0.99654453 0.99887903 0.99851837 0.99548567 0.99174092 0.98181729 0.96125528 0.92776596 0.86027283 0.73918523] tolerância 72.63032663811005 ===================================================== d [-27.5013443 16.44448298 12.71348019 16.73256157 -11.81388909 -19.35735384 37.49069063 -1.58553268 -36.04914644 22.4670489 ] y [0.99647231 0.99892097 0.99855096 0.99552471 0.99171429 0.98177199 0.96134114 0.92776401 0.86018533 0.7392392 ] tolerância 72.07639225294567 ===================================================== d [-24.79518913 15.33132278 11.82799124 17.12068381 -12.47602317 -19.72338377 39.0041321 -2.34607887 -35.47265032 22.35507806] y [0.99640622 0.99896048 0.9985815 0.99556491 0.99168591 0.98172548 0.96143122 0.9277602 0.86009871 0.73929319] tolerância 71.50117423800405 ===================================================== d [-22.04627303 14.17567566 10.95054575 17.45973696 -13.10189463 -20.01045841 40.38569185 -3.08963913 -34.80916567 22.19883419] y [0.99634683 0.99899721 0.99860984 0.99560592 0.99165602 0.98167823 0.96152465 0.92775458 0.86001374 0.73934674] tolerância 70.91430829815386 ===================================================== d [-19.2626452 12.98301077 10.08290791 17.74776015 -13.69001554 -20.21773177 41.63179619 -3.81254375 -34.06189639 21.9994735 ] y [0.99629386 0.99903127 0.99863615 0.99564788 0.99162454 0.98163015 0.96162169 0.92774716 0.8599301 0.73940008] tolerância 70.3161493441294 ===================================================== d [-16.45447462 11.76033946 9.22769895 17.98493941 -14.24048003 -20.34696588 42.74396437 -4.51169638 -33.23793123 21.76060036] y [0.99624757 0.99906246 0.99866038 0.99569052 0.99159165 0.98158157 0.96172173 0.927738 0.85984825 0.73945294] tolerância 69.71470275207412 ===================================================== d [-13.63093979 10.51430182 8.38698778 18.1716681 -14.75357412 -20.40025649 43.72440001 -5.18438518 -32.34406507 21.48566729] y [0.99620816 0.99909063 0.99868248 0.9957336 0.99155754 0.98153283 0.96182411 0.92772719 0.85976864 0.73950506] tolerância 69.11740745042232 ===================================================== d [-10.79941656 9.25046538 7.56199338 18.30736795 -15.22884702 -20.37856157 44.57307702 -5.827889 -31.38478481 21.17671945] y [0.99617551 0.99911582 0.99870257 0.99577713 0.9915222 0.98148397 0.96192885 0.92771477 0.85969116 0.73955653] tolerância 68.52683543450213 ===================================================== d [ -7.96703489 7.97436517 6.75389468 18.39184794 -15.6661594 -20.2832506 45.29092685 -6.43974457 -30.36492896 20.83611057] y [0.99614956 0.99913805 0.99872074 0.99582112 0.9914856 0.981435 0.96203595 0.92770077 0.85961575 0.73960741] tolerância 67.9462206044046 ===================================================== d [ -5.14123302 6.6921334 5.96413311 18.42695622 -16.06703409 -20.118077 45.88393501 -7.01844598 -29.29220399 20.46813836] y [0.99613048 0.99915715 0.99873692 0.99586517 0.99144808 0.98138642 0.96214444 0.92768534 0.85954301 0.73965732] tolerância 67.38545177696979 ===================================================== d [ -2.32742677 5.40840172 5.19305305 18.41216612 -16.43106339 -19.88404505 46.35234468 -7.56185982 -28.16993511 20.07428658] y [0.99611812 0.99917323 
0.99875125 0.99590945 0.99140947 0.98133808 0.96225469 0.92766848 0.85947263 0.7397065 ] tolerância 66.84491217945828 ===================================================== d [ 0.46883384 4.12841322 4.44141108 18.34957492 -16.76005431 -19.58511684 46.70292795 -8.06910603 -27.00509364 19.65858781] y [0.99611255 0.99918618 0.99876369 0.99595355 0.99137011 0.98129045 0.96236572 0.92765037 0.85940515 0.73975459] tolerância 66.33386739046577 ===================================================== d [ 3.24322935 2.85629308 3.70912587 18.23891674 -17.05381229 -19.22256297 46.93663112 -8.53850564 -25.80064177 19.22242976] y [0.99611367 0.9991961 0.99877436 0.99599764 0.99132984 0.98124339 0.96247794 0.92763098 0.85934027 0.73980182] tolerância 65.85256583664479 ===================================================== d [ 5.99194857 1.59632283 2.9963082 18.08185288 -17.31385155 -18.79974135 47.05928206 -8.96941466 -24.56197682 18.76901692] y [0.99612147 0.99920297 0.99878327 0.99604147 0.99128887 0.9811972 0.96259072 0.92761046 0.85927827 0.73984801] tolerância 65.40776573852928 ===================================================== d [ 8.71204516 0.35224622 2.30265862 17.87980036 -17.54147177 -18.31972365 47.07612205 -9.36134729 -23.29357692 18.30100066] y [0.99613582 0.99920679 0.99879045 0.99608478 0.99124739 0.98115217 0.96270344 0.92758898 0.85921944 0.73989297] tolerância 65.00494021956086 ===================================================== d [ 11.40069974 -0.87256404 1.62757755 17.63268321 -17.73664328 -17.78390835 46.98863254 -9.71315688 -21.99768687 17.81952346] y [0.99615675 0.99920764 0.99879598 0.99612774 0.99120524 0.98110815 0.96281656 0.92756648 0.85916347 0.73993694] tolerância 64.64413668644036 ===================================================== d [ 14.05618805 -2.07494475 0.9704754 17.34197784 -17.90080564 -17.19529485 46.80233043 -10.02466601 -20.67823518 17.32711084] y [0.99618415 0.99920554 0.99879989 0.99617011 0.99116263 0.98106542 0.96292946 0.92754315 0.85911061 0.73997976] tolerância 64.3308253184397 ===================================================== d [ 16.6770518 -3.2520504 0.33051089 17.00848277 -18.03476669 -16.55613143 46.52099963 -10.29549264 -19.3379343 16.82547749] y [0.99621792 0.99920055 0.99880222 0.99621178 0.99111961 0.9810241 0.96304192 0.92751906 0.85906092 0.74002139] tolerância 64.06800200000853 ===================================================== d [ 19.26221134 -4.4012636 -0.29328644 16.63288801 -18.1392436 -15.8685115 46.14816785 -10.52531299 -17.97909406 16.31613905] y [0.99625799 0.99919274 0.99880302 0.99625265 0.99107628 0.98098432 0.9631537 0.92749432 0.85901446 0.74006182] tolerância 63.85831605651271 ===================================================== d [ 21.81087761 -5.5201542 -0.90199157 16.21575924 -18.21484154 -15.13436346 45.68705888 -10.71383369 -16.60364106 15.80041523] y [0.99630428 0.99918217 0.99880231 0.99629261 0.99103269 0.98094619 0.96326458 0.92746903 0.85897126 0.74010103] tolerância 63.704065255779824 ===================================================== d [ 24.32246065 -6.60643623 -1.49675979 15.75752707 -18.26203561 -14.35544774 45.14055647 -10.86076663 -15.21314471 15.27943621] y [0.99635668 0.9991689 0.99880015 0.99633158 0.99098893 0.98090982 0.96337436 0.92744329 0.85893136 0.74013899] tolerância 63.607192189563484 ===================================================== d [ 26.79647625 -7.65792374 -2.07880297 15.25848156 -18.28115576 -13.53336084 44.51118 -10.96580699 -13.80884929 14.75415223] y [0.99641513 0.99915303 
0.99879655 0.99636944 0.99094505 0.98087533 0.96348283 0.92741719 0.85889481 0.74017571] tolerância 63.569281282427255 ===================================================== d [ 29.23243176 -8.67240233 -2.64934108 14.71868978 -18.27234511 -12.66945319 43.80095431 -11.02851822 -12.39170497 14.22537687] y [0.99647971 0.99913457 0.99879154 0.99640622 0.99090098 0.98084271 0.96359011 0.92739076 0.85886153 0.74021127] tolerância 63.591395782004405 ===================================================== d [ 31.63016956 -9.64795287 -3.20969315 14.13842596 -18.23586751 -11.76527583 43.01229654 -11.04874413 -10.96257132 13.69388002] y [0.99654995 0.99911373 0.99878517 0.99644158 0.99085708 0.98081227 0.96369535 0.92736426 0.85883175 0.74024545] tolerância 63.675341689221156 ===================================================== d [ 33.98879519 -10.58229417 -3.76108199 13.51738875 -18.17141183 -10.82183759 42.14619456 -11.02586461 -9.52184685 13.16008655] y [0.99662619 0.99909048 0.99877744 0.99647566 0.99081313 0.98078391 0.96379903 0.92733763 0.85880533 0.74027845] tolerância 63.82104600678038 ===================================================== d [ 36.30768007 -11.47332354 -4.30477392 12.85546871 -18.0788127 -9.84033268 41.20407276 -10.95946487 -8.06994844 12.62448341] y [0.99670811 0.99906497 0.99876837 0.99650824 0.99076933 0.98075783 0.96390061 0.92731105 0.85878238 0.74031017] tolerância 64.02922525439351 ===================================================== d [ 38.58557374 -12.31875366 -4.84196022 12.15232448 -17.957562 -8.82178092 40.18661819 -10.84895671 -6.60707111 12.0873232 ] y [0.99679562 0.99903732 0.998758 0.99653923 0.99072575 0.98073411 0.96399992 0.92728464 0.85876293 0.7403406 ] tolerância 64.2996152942611 ===================================================== d [ 40.82025827 -13.11609896 -5.37369765 11.40744986 -17.80675305 -7.76712789 39.09373163 -10.6936281 -5.13325782 11.54855994] y [0.99688834 0.99900772 0.99874636 0.99656843 0.99068261 0.98071291 0.96409648 0.92725857 0.85874705 0.74036964] tolerância 64.63075052041594 ===================================================== d [ 43.01085392 -13.86297644 -5.90118372 10.62065556 -17.62612281 -6.67756491 37.92680461 -10.49294248 -3.64888342 11.00880945] y [0.99698703 0.99897601 0.99873337 0.99659601 0.99063955 0.98069414 0.964191 0.92723272 0.85873464 0.74039757] tolerância 65.02353903363333 ===================================================== d [ 45.15366099 -14.55643928 -6.42522634 9.79126293 -17.41427558 -5.5540336 36.68488822 -10.24597058 -2.15397663 10.46779868] y [0.9970907 0.99894259 0.99871915 0.9966216 0.99059707 0.98067804 0.96428241 0.92720743 0.85872585 0.7404241 ] tolerância 65.47499547830319 ===================================================== d [ 47.24555724 -15.1936258 -6.9466674 8.91891242 -17.17021566 -4.39778276 35.36806816 -9.95199028 -0.64886201 9.925648 ] y [0.99719953 0.99890751 0.99870366 0.9966452 0.9905551 0.98066465 0.96437083 0.92718273 0.85872065 0.74044933] tolerância 65.98356458016565 ===================================================== d [ 49.28253957 -15.77141625 -7.46618398 8.00325598 -16.89274775 -3.21018037 33.97617504 -9.61021881 0.86600773 9.38241936] y [0.9973134 0.99887089 0.99868692 0.9966667 0.99051371 0.98065405 0.96445608 0.92715874 0.85871909 0.74047325] tolerância 66.54689331313996 ===================================================== d [ 51.2599132 -16.28650978 -7.98430275 7.04406351 -16.58059883 -1.99280418 32.50907504 -9.21989843 2.38997374 8.83820022] y [0.99743219 0.99883287 
0.99866892 0.99668599 0.990473 0.98064632 0.96453797 0.92713558 0.85872118 0.74049587] tolerância 67.162167877107 ===================================================== d [ 53.17374068 -16.73576761 -8.50163772 6.04139463 -16.23289922 -0.74750007 30.9676464 -8.78049462 3.92219124 8.29343117] y [0.99755612 0.9987935 0.99864962 0.99670302 0.99043291 0.9806415 0.96461657 0.92711329 0.85872696 0.74051724] tolerância 67.82795497351984 ===================================================== d [ 55.01556302 -17.11496982 -9.01799135 4.99512975 -15.84756048 0.52366039 29.3505872 -8.29106096 5.46136632 7.74784064] y [0.99768428 0.99875316 0.99862913 0.99671758 0.99039378 0.9806397 0.96469121 0.92709213 0.85873641 0.74053723] tolerância 68.53746031484162 ===================================================== d [ 56.78119908 -17.42107788 -9.53381756 3.90588502 -15.42403417 1.81821033 27.65985836 -7.75144704 7.00616702 7.20217016] y [0.99781729 0.99871178 0.99860732 0.99672966 0.99035547 0.98064096 0.96476217 0.92707208 0.85874961 0.74055596] tolerância 69.28971918668591 ===================================================== d [ 58.46024767 -17.64954291 -10.04841311 2.77417508 -14.9602343 3.13323057 25.89477095 -7.16099346 8.55440348 6.65633072] y [0.99795415 0.99866979 0.99858434 0.99673907 0.99031829 0.98064535 0.96482884 0.9270534 0.8587665 0.74057332] tolerância 70.0764073556758 ===================================================== d [ 60.04569688 -17.79691173 -10.56155324 1.60118802 -14.45532709 4.46543248 24.0572782 -6.5197815 10.10385872 6.11098685] y [0.99809505 0.99862725 0.99856012 0.99674576 0.99028223 0.9806529 0.96489125 0.92703614 0.85878712 0.74058936] tolerância 70.8936870700361 ===================================================== d [ 61.53088968 -17.85987647 -11.0730174 0.38839604 -13.90876623 5.8112447 22.15010416 -5.82810982 11.65213062 5.56704223] y [0.99824023 0.99858422 0.99853459 0.99674963 0.99024729 0.98066369 0.96494942 0.92702037 0.85881155 0.74060413] tolerância 71.73820855870326 ===================================================== d [ 62.90306624 -17.83388761 -11.581301 -0.86229856 -13.3188789 7.166175 20.17435072 -5.08614842 13.19543762 5.02484428] y [0.99838853 0.99854118 0.9985079 0.99675057 0.99021376 0.9806777 0.9650028 0.92700633 0.85883963 0.74061755] tolerância 72.59960019002655 ===================================================== d [ 64.15402862 -17.71590543 -12.08558696 -2.14842398 -12.68541016 8.52561832 18.13385708 -4.29487402 14.73036613 4.4854649 ] y [0.99854015 0.99849819 0.99847999 0.99674849 0.99018166 0.98069497 0.96505143 0.92699407 0.85887144 0.74062966] tolerância 73.47286183821551 ===================================================== d [ 65.27384751 -17.50273232 -12.58458225 -3.46702447 -12.0080348 9.88437338 16.03266559 -3.45549499 16.2528091 3.94998368] y [0.99869477 0.99845549 0.99845086 0.99674331 0.99015109 0.98071552 0.96509514 0.92698372 0.85890694 0.74064047] tolerância 74.35085406356671 ===================================================== d [ 66.2524513 -17.19153102 -13.07677799 -4.81459721 -11.28675109 11.23676029 13.8756572 -2.56964623 17.75822352 3.41963744] y [0.9988521 0.99841331 0.99842053 0.99673495 0.99012214 0.98073935 0.96513378 0.92697539 0.85894611 0.74065 ] tolerância 75.22605539688487 ===================================================== d [ 67.0797811 -16.77991573 -13.56045807 -6.18707718 -10.52192271 12.57664271 11.66859128 -1.63942117 19.24165486 2.8958185 ] y [0.99901179 0.99837187 0.99838901 0.99672335 0.99009494 0.98076643 
0.96516722 0.92696919 0.85898892 0.74065824] tolerância 76.09061782856364 ===================================================== d [ 67.74378873 -16.26567252 -14.03321445 -7.57956529 -9.71400179 13.89705943 9.41772303 -0.66743085 20.69713145 2.37991068] y [0.99917297 0.99833155 0.99835642 0.99670848 0.99006966 0.98079665 0.96519526 0.92696526 0.85903515 0.7406652 ] tolerância 76.93398611650083 ===================================================== d [ 68.23863739 -15.64835465 -14.49375372 -8.9872537 -8.8647587 15.19174676 7.13132398 0.34322481 22.12004705 1.87382644] y [0.99933625 0.99829235 0.9983226 0.99669022 0.99004624 0.98083014 0.96521796 0.92696365 0.85908504 0.74067093] tolerância 77.75202035758699 ===================================================== d [ 68.55187931 -14.92681512 -14.93904312 -10.40394899 -7.97539151 16.45275468 4.81733807 1.38890172 23.50337675 1.37918726] y [0.99950021 0.99825475 0.99828777 0.99666862 0.99002494 0.98086665 0.96523509 0.92696447 0.85913819 0.74067543] tolerância 78.53237425762835 ===================================================== d [ 68.67678481 -14.10174029 -15.36700224 -11.82355229 -7.04828201 17.67289222 2.48531873 2.46546739 24.84141642 0.89796834] y [0.99966493 0.99821888 0.99825188 0.99664362 0.99000578 0.98090618 0.96524667 0.92696781 0.85919466 0.74067875] tolerância 79.26832944014075 ===================================================== d [ 68.60444998 -13.17416499 -15.77471156 -13.23900126 -6.08594741 18.84412074 0.14528711 3.56817453 26.1272712 0.43208873] y [0.99982944 0.9981851 0.99821507 0.9966153 0.9899889 0.98094851 0.96525262 0.92697371 0.85925416 0.7406809 ] tolerância 79.94970594951786 ===================================================== d [ 6.83315809e+01 -1.21466411e+01 -1.61603063e+01 -1.46437001e+01 -5.09182629e+00 1.99594396e+01 -2.19172225e+00 4.69210310e+00 2.73556694e+01 -1.62725681e-02] y [0.99999428 0.99815345 0.99817717 0.99658349 0.98997427 0.98099379 0.96525297 0.92698229 0.85931694 0.74068194] tolerância 80.57169234654923 ===================================================== d [ 67.85108193 -11.0219917 -16.52060855 -16.02971531 -4.06937678 21.0106285 -4.51434474 5.83155241 28.51947313 -0.44510087] y [1.00015796 0.99812435 0.99813846 0.99654841 0.98996208 0.9810416 0.96524772 0.92699353 0.85938247 0.7406819 ] tolerância 81.12389914363716 ===================================================== d [ 67.16066087 -9.80451033 -16.85327266 -17.38955696 -3.02280355 21.99051232 -6.81064808 6.98067376 29.61299229 -0.85228758] y [1.00032049 0.99809795 0.99809888 0.99651002 0.98995233 0.98109193 0.96523691 0.9270075 0.85945078 0.74068083] tolerância 81.60026160842978 ===================================================== d [ 66.25878208 -8.499311 -17.15580705 -18.71544047 -1.95665831 22.89197975 -9.06839149 8.13326213 30.63048834 -1.23576547] y [1.00048136 0.99807446 0.99805851 0.99646836 0.98994509 0.9811446 0.96522059 0.92702422 0.85952172 0.74067879] tolerância 81.99429721264691 ===================================================== d [ 65.14503051 -7.11255228 -17.42559359 -19.9993491 -0.87586835 23.70817971 -11.27519861 9.28274214 31.56628967 -1.59359784] y [1.00063958 0.99805417 0.99801755 0.99642367 0.98994042 0.98119927 0.96519894 0.92704364 0.85959486 0.74067584] tolerância 82.29947271913497 ===================================================== d [ 63.82178867 -5.65130968 -17.66045853 -21.23364469 0.21428028 24.43306702 -13.41871853 10.42253802 32.41566778 -1.92389096] y [1.00079514 0.99803719 0.99797594 0.99637591 
0.98993832 0.98125588 0.96517202 0.9270658 0.85967024 0.74067203] tolerância 82.5112497299678 ===================================================== d [ 62.29305223 -4.12348065 -17.85834631 -22.41083931 1.30825641 25.06116877 -15.48679484 11.54598734 33.17437152 -2.22488641] y [1.00094754 0.99802369 0.99793376 0.99632521 0.98993884 0.98131423 0.96513997 0.92709069 0.85974764 0.74066744] tolerância 82.62570483034284 ===================================================== d [ 60.5647951 -2.53788111 -18.01739517 -23.5237943 2.40034157 25.58787904 -17.46782371 12.64644048 33.83884586 -2.49507373] y [1.00109583 0.99801387 0.99789125 0.99627186 0.98994195 0.98137388 0.96510311 0.92711818 0.85982661 0.74066214] tolerância 82.64007544532842 ===================================================== d [ 58.64408528 -0.9036895 -18.13589019 -24.56557021 3.48469421 26.00898677 -19.35033373 13.71730919 34.40594088 -2.73295995] y [1.00124045 0.99800781 0.99784823 0.99621569 0.98994768 0.98143498 0.9650614 0.92714838 0.85990742 0.74065619] tolerância 82.55162904648562 ===================================================== d [ 56.5421471 0.76890897 -18.21296529 -25.53069608 4.55555944 26.32245179 -21.12473673 14.75265336 34.87457196 -2.93760387] y [1.00137961 0.99800567 0.99780519 0.99615739 0.98995595 0.9814967 0.96501548 0.92718093 0.85998906 0.7406497 ] tolerância 82.36212312394913 ===================================================== d [ 54.26781829 2.46977252 -18.24697819 -26.41271337 5.60701937 26.52520073 -22.78066308 15.74602969 35.24226581 -3.10789358] y [1.00151421 0.9980075 0.99776184 0.99609662 0.9899668 0.98155936 0.96496519 0.92721605 0.86007208 0.74064271] tolerância 82.06873230657772 ===================================================== d [ 51.834905 4.18833572 -18.23764335 -27.20736108 6.63352454 26.61679782 -24.31012112 16.69220661 35.509449 -3.24321452] y [1.00164339 0.99801338 0.9977184 0.99603375 0.98998014 0.98162251 0.96491096 0.92725353 0.86015597 0.74063531] tolerância 81.67472312491151 ===================================================== d [ 49.25852139 5.9139504 -18.18502674 -27.91132347 7.62985553 26.59800452 -25.70664276 17.5865456 35.67752641 -3.3433139 ] y [1.0017664 0.99802332 0.99767512 0.99596918 0.98999588 0.98168567 0.96485327 0.92729314 0.86024024 0.74062761] tolerância 81.18475325355797 ===================================================== d [ 46.55244883 7.63596616 -18.08874177 -28.52090576 8.59081165 26.4692859 -26.96381392 18.42425142 35.74719422 -3.4079822 ] y [1.00188329 0.99803735 0.99763197 0.99590295 0.99001399 0.98174879 0.96479227 0.92733487 0.8603249 0.74061968] tolerância 80.60077416126593 ===================================================== d [ 43.73400517 9.34416457 -17.94968938 -29.03482116 9.51199405 26.23361584 -28.07811607 19.20209157 35.7218881 -3.43752954] y [1.00199341 0.99805542 0.99758918 0.99583548 0.99003431 0.9818114 0.96472849 0.92737846 0.86040946 0.74061162] tolerância 79.9302211971223 ===================================================== d [ 40.81685639 11.02812247 -17.7674961 -29.45002601 10.38861618 25.89227697 -29.04461661 19.91582321 35.60263558 -3.43205218] y [1.00209719 0.99807759 0.99754659 0.99576658 0.99005688 0.98187365 0.96466186 0.92742402 0.86049423 0.74060346] tolerância 79.17394119307953 ===================================================== d [ 37.82109226 12.67900447 -17.54450854 -29.7683928 11.21770686 25.45121253 -29.86394704 20.56451468 35.39603905 -3.39251219] y [1.00219375 0.99810368 0.99750456 0.99569691 0.99008146 
0.9819349 0.96459315 0.92747113 0.86057845 0.74059534] tolerância 78.34477421822271 ===================================================== d [ 34.76196547 14.28760165 -17.28135886 -29.98937157 11.99573287 24.91408496 -30.53478761 21.14579001 35.10537973 -3.31962485] y [1.00228321 0.99813367 0.99746305 0.9956265 0.99010799 0.98199511 0.96452251 0.92751978 0.86066218 0.74058732] tolerância 77.4472213596832 ===================================================== d [ 31.65632239 15.84584787 -16.9796739 -30.11446244 12.72013399 24.28631636 -31.05834194 21.65880484 34.73600603 -3.21444313] y [1.00236545 0.99816747 0.99742217 0.99555555 0.99013637 0.98205404 0.96445027 0.9275698 0.86074523 0.74057946] tolerância 76.48972464934538 ===================================================== d [ 28.52055031 17.34652276 -16.64129807 -30.14595766 13.38894413 23.57389291 -31.43709964 22.10343815 34.29376239 -3.07818652] y [1.00244033 0.99820495 0.99738201 0.99548432 0.99016646 0.9821115 0.9643768 0.92762104 0.8608274 0.74057186] tolerância 75.48124297874776 ===================================================== d [ 25.37034538 18.78331735 -16.26825176 -30.08686464 14.00079077 22.78324222 -31.67473107 22.48026025 33.78488314 -2.91220856] y [1.0025078 0.99824599 0.99734264 0.995413 0.99019813 0.98216726 0.96430244 0.92767333 0.86090852 0.74056458] tolerância 74.43111660728528 ===================================================== d [ 22.2216467 20.15176193 -15.86344416 -29.94227174 14.55558267 21.92226164 -31.77762046 22.79156 33.21741985 -2.71817325] y [1.00256762 0.99829028 0.99730428 0.99534206 0.99023115 0.98222099 0.96422775 0.92772634 0.86098819 0.74055771] tolerância 73.35251834446875 ===================================================== d [ 19.08560876 21.44547064 -15.42741488 -29.71313285 15.05155412 20.99541086 -31.74789471 23.03675564 32.59456475 -2.497218 ] y [1.00262019 0.99833795 0.99726675 0.99527123 0.99026558 0.98227284 0.96415257 0.92778025 0.86106677 0.74055128] tolerância 72.24723734380771 ===================================================== d [ 15.97710524 22.66322333 -14.96416412 -29.40732263 15.49060592 20.01214594 -31.59556009 23.22066364 31.92661942 -2.25125709] y [1.0026652 0.99838852 0.99723037 0.99520116 0.99030107 0.98232235 0.96407771 0.92783457 0.86114363 0.74054539] tolerância 71.13302394491207 ===================================================== d [ 12.90603314 23.80130012 -14.47501741 -29.02793056 15.87259558 18.97809883 -31.32568139 23.34467784 31.21830155 -1.98166818] y [1.00270287 0.99844196 0.99719509 0.99513181 0.9903376 0.98236954 0.9640032 0.92788933 0.86121892 0.74054008] tolerância 70.01502442071767 ===================================================== d [ 9.88162918 24.8568356 -13.96146233 -28.57852897 16.19781448 17.89902455 -30.94392313 23.4107276 30.47466554 -1.68978284] y [1.0027334 0.99849827 0.99716084 0.99506315 0.99037515 0.98241444 0.9639291 0.92794455 0.86129276 0.7405354 ] tolerância 68.89889000900668 ===================================================== d [ 6.91362717 25.83185802 -13.42730859 -28.06770749 16.46966776 16.78370248 -30.46175701 23.42496203 29.70560858 -1.37733426] y [1.0027567 0.99855688 0.99712792 0.99499575 0.99041335 0.98245665 0.96385613 0.92799976 0.86136463 0.74053141] tolerância 67.80178678249372 ===================================================== d [ 4.00812012 26.72485018 -12.87366213 -27.49884163 16.68897001 15.63713452 -29.88504382 23.38956257 28.91524896 -1.04553766] y [1.00277301 0.9986178 0.99709626 0.99492957 0.99045218 
0.98249622 0.9637843 0.928055 0.86143468 0.74052816] tolerância 66.72791065337397 ===================================================== d [ 1.17045426 27.53507425 -12.30172951 -26.8756329 16.85688619 14.46427775 -29.21998296 23.30714069 28.10795375 -0.69549274] y [1.00278249 0.99868101 0.9970658 0.99486452 0.99049166 0.98253321 0.9637136 0.92811033 0.86150308 0.74052569] tolerância 65.68198686254628 ===================================================== d [ -1.5949804 28.26689971 -11.71466601 -26.20620859 16.97761184 13.27228664 -28.47781779 23.18425212 27.29228888 -0.3284656 ] y [1.00278525 0.99874595 0.9970368 0.99480114 0.99053141 0.98256732 0.9636447 0.92816529 0.86156936 0.74052405] tolerância 64.67940801108094 ===================================================== d [ -4.28553716 28.91893855 -11.11255555 -25.49224536 17.05156917 12.06432565 -27.66262566 23.02217184 26.47011914 0.05476975] y [1.00278148 0.99881281 0.99700908 0.99473915 0.99057157 0.98259872 0.96357733 0.92822013 0.86163392 0.74052327] tolerância 63.71993193453153 ===================================================== d [ -6.90002667 29.49645312 -10.49793369 -24.74102894 17.08306174 10.84616841 -26.78472515 22.82724957 25.64878712 0.45335947] y [1.00277137 0.99888101 0.99698288 0.99467904 0.99061178 0.98262717 0.9635121 0.92827442 0.86169634 0.7405234 ] tolerância 62.81733237494291 ===================================================== d [ -9.43759903 29.99872244 -9.87061719 -23.95387755 17.07264414 9.62025826 -25.84776977 22.60076564 24.82957685 0.86679699] y [1.00275505 0.99895078 0.99695805 0.99462051 0.9906522 0.98265283 0.96344874 0.92832842 0.86175701 0.74052447] tolerância 61.970916864629295 ===================================================== d [-11.89925933 30.43001574 -9.23200437 -23.13590381 17.02366199 8.39048905 -24.85954513 22.34761206 24.01752853 1.2947077 ] y [1.00273272 0.99902175 0.9969347 0.99456385 0.99069258 0.98267558 0.9633876 0.92838188 0.86181575 0.74052653] tolerância 61.18998486604631 ===================================================== d [-14.28664492 30.79295365 -8.58258217 -22.29026805 16.9383432 7.15950792 -23.82564425 22.07110967 23.21563424 1.73690213] y [1.00270457 0.99909373 0.99691286 0.99450912 0.99073285 0.98269543 0.96332879 0.92843475 0.86187256 0.74052959] tolerância 60.47953964916593 ===================================================== d [-16.60214319 31.09011908 -7.92258191 -21.41966031 16.81876164 5.92944987 -22.75106633 21.77433261 22.4264099 2.19337771] y [1.00267078 0.99916657 0.99689255 0.99445639 0.99077292 0.98271237 0.96327243 0.92848696 0.86192748 0.7405337 ] tolerância 59.844066375045436 ===================================================== d [-18.8487529 31.32396743 -7.25198318 -20.52628753 16.66680014 4.70196961 -21.64019813 21.46008117 21.65191134 2.66431406] y [1.0026315 0.99924012 0.99687381 0.99440572 0.99081271 0.98272639 0.96321861 0.92853847 0.86198053 0.74053889] tolerância 59.28755349840503 ===================================================== d [-21.02994433 31.49674398 -6.5705218 -19.61186973 16.48411924 3.47828159 -20.49680978 21.13086152 20.89375564 3.15006536] y [1.00258692 0.99931422 0.99685666 0.99435716 0.99085213 0.98273752 0.96316742 0.92858923 0.86203175 0.74054519] tolerância 58.813516449947784 ===================================================== ###Markdown Nota-se que para n = 10 leva muitas mais iterações porém vai chegando próximo à solução. A última vez, levou mais de 1h40min e não convergiu, porém os valores já estavam próximos à solução. 
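###Markdown As a quick cross-check of the n = 10 discussion above, the same extended Rosenbrock objective can be minimized with an off-the-shelf solver. The sketch below is an assumption on top of the notebook (SciPy is not used in the original cells); it mirrors the objective and the alternating starting point built here, and it should approach the vector of ones with an objective value near zero.
###Code
import numpy as np
from scipy.optimize import minimize

def rosen_ext(x):
    # Extended Rosenbrock objective, mirroring the f1 built with sympy in this notebook
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

# Same alternating starting point used here: -1.2 at odd positions, 1 at even positions
x0 = np.array([-1.2 if i % 2 != 0 else 1.0 for i in range(1, 11)])

res = minimize(rosen_ext, x0, method="CG", tol=1e-8)
print(res.x)    # expected to approach [1, 1, ..., 1]
print(res.fun)  # expected to approach 0
###Output _____no_output_____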
###Code
# For n = 50
import numpy as np
import sympy as sym
import pandas as pd

variaveis = list(sym.symbols("x:50"))
c = variaveis
f1 = 0
for i in range(1, 50):
    f1 = f1 + 100*(c[i] - c[i-1]**2)**2 + (1 - c[i-1])**2

x = []
for i in range(1, 51):
    if (i % 2 != 0):
        x.append(-1.2)
    else:
        x.append(1)
print('x ', x)

eps = 1e-2
grad = gradiente_simbolico(f1, c)
d1f = eval_gradiente(grad, c, x)
p = Parametros(f1, grad, c, x, eps)
m, fx, table = gradiente_conjugado(p)
###Output _____no_output_____
###Markdown **2.** Solve: Minimize $(x_1 - x_2^3)^2 + 3(x_1 - x_2)^4$
###Code
import numpy as np
import sympy as sym
import pandas as pd
import math

x1 = sym.Symbol('x1')
x2 = sym.Symbol('x2')
c = [x1, x2]
fo = (c[0] - c[1]**3)**2 + 3*(c[0] - c[1])**4
x = [1.2, 1.5]
eps = 1e-4
nmax = 7
grad = gradiente_simbolico(fo, c)
p = Parametros(fo, grad, c, x, eps)
m, fx, table = gradiente_conjugado(p)
m
fx
table
###Output _____no_output_____
###Markdown **3.** Solve: $2(x_1 - 2)^4 + (2x_1 - x_2)^2 = 4$
###Code
x1 = sym.Symbol('x1')
x2 = sym.Symbol('x2')
variaveis = [x1, x2]
c = variaveis
x = [1.2, 2]
fo = (2*(x1 - 2)**4 + (2*x1 - x2)**2 - 4)**2
eps = 1e-6
nmax = 7
grad = gradiente_simbolico(fo, c)
p = Parametros(fo, grad, c, x, eps)
m, fx, table = gradiente_conjugado(p)
fx
m
table
###Output _____no_output_____
###Markdown For this case, it took only two iterations to reach the solution, with a precision on the order of $10^{-10}$.

**4.** Solve: Minimize $(x_1 - 3)^4 + (x_1 - 3x_2)^2$
###Code
x1 = sym.Symbol('x1')
x2 = sym.Symbol('x2')
c = [x1, x2]
x = [1.2, 0.5]
fo = (x1 - 3)**4 + (x1 - 3*x2)**2
eps = 1e-3
nmax = 7
grad = gradiente_simbolico(fo, c)
p = Parametros(fo, grad, c, x, eps)
m, fx, table = gradiente_conjugado(p)
table
m
fx
###Output _____no_output_____
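###Markdown As a small, assumed sanity check on exercise 4 (not part of the original notebook), the stationary point can also be found symbolically with the same `sympy` machinery: setting the gradient of $(x_1 - 3)^4 + (x_1 - 3x_2)^2$ to zero gives $x_1 = 3$, $x_2 = 1$, which is the point the conjugate gradient run should approach.
###Code
import sympy as sym

x1, x2 = sym.symbols("x1 x2")
fo = (x1 - 3)**4 + (x1 - 3*x2)**2

# The gradient must vanish at the minimizer
stationary = sym.solve([sym.diff(fo, x1), sym.diff(fo, x2)], [x1, x2], dict=True)
print(stationary)                  # expected: [{x1: 3, x2: 1}]
print(fo.subs({x1: 3, x2: 1}))     # expected: 0
###Output _____no_output_____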
Hidden Layers And Hidden Neurons.ipynb
###Markdown Keras Tuner- Decide Number of Hidden Layers And Neuron In Neural Network ###Code !pip install keras-tuner import pandas as pd from tensorflow import keras from tensorflow.keras import layers import tensorflow as tf from kerastuner.tuners import RandomSearch import IPython df=pd.read_csv('Real_Combine.csv') df.head() X=df.iloc[:,:-1] ## independent features y=df.iloc[:,-1] ## dependent features ###Output _____no_output_____ ###Markdown Hyperparameters1. How many number of hidden layers we should have?2. How many number of neurons we should have in hidden layers?3. Learning Rate ###Code def build_model(hp): model = keras.Sequential() for i in range(hp.Int('num_layers', 2, 20)): model.add(layers.Dense(units=hp.Int('units_' + str(i), min_value=32, max_value=512, step=32), activation='relu')) model.add(layers.Dense(1, activation='linear')) model.compile( optimizer=keras.optimizers.Adam( hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])), loss='mean_absolute_error', metrics=['mean_absolute_error']) return model tuner = RandomSearch( build_model, objective='val_mean_absolute_error', max_trials=5, executions_per_trial=3, directory='project', project_name='Air Quality Index') def model_builder(hp): model = keras.Sequential() model.add(keras.layers.Flatten(input_shape=(28, 28))) # Tune the number of units in the first Dense layer # Choose an optimal value between 32-512 hp_units = hp.Int('units', min_value = 32, max_value = 512, step = 32) model.add(keras.layers.Dense(units = hp_units, activation = 'relu')) model.add(keras.layers.Dense(10)) # Tune the learning rate for the optimizer # Choose an optimal value from 0.01, 0.001, or 0.0001 hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4]) model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate), loss = keras.losses.SparseCategoricalCrossentropy(from_logits = True), metrics = ['accuracy']) return model import kerastuner as kt tuner_band = kt.Hyperband(build_model, objective = 'val_mean_absolute_error', max_epochs = 10, factor = 3, directory = 'my_dir', project_name = 'intro_to_kt') class ClearTrainingOutput(tf.keras.callbacks.Callback): def on_train_end(*args, **kwargs): IPython.display.clear_output(wait = True) tuner_band.search(X_train, y_train, epochs = 10, validation_data = (X_test, y_test), callbacks = [ClearTrainingOutput()]) # Get the optimal hyperparameters best_hps = tuner_band.get_best_hyperparameters(num_trials = 1)[0] print(f""" The hyperparameter search is complete. The optimal number of units in the first densely-connected layer is {best_hps.get('units')} and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}. 
""") # Build the model with the optimal hyperparameters and train it on the data model = tuner_band.hypermodel.build(best_hps) model.fit(X_train, y_train, epochs = 10, validation_data = (X_test, y_test)) tuner.search_space_summary() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) tuner.search(X_train, y_train, epochs=5, validation_data=(X_test, y_test)) tuner.results_summary() tuner_band.results_summary() ###Output Results summary Results in my_dir/intro_to_kt Showing 10 best trials Objective(name='val_mean_absolute_error', direction='min') Trial summary Hyperparameters: num_layers: 4 units_0: 288 units_1: 224 learning_rate: 0.01 units_2: 64 units_3: 512 units_4: 64 units_5: 288 units_6: 64 units_7: 32 units_8: 320 units_9: 224 units_10: 448 units_11: 160 units_12: 160 units_13: 320 units_14: 448 units_15: 256 units_16: 288 units_17: 352 units_18: 96 tuner/epochs: 10 tuner/initial_epoch: 0 tuner/bracket: 0 tuner/round: 0 Score: 44.205562591552734 Trial summary Hyperparameters: num_layers: 2 units_0: 384 units_1: 448 learning_rate: 0.01 units_2: 288 units_3: 384 units_4: 416 units_5: 128 units_6: 480 units_7: 480 units_8: 128 units_9: 192 units_10: 96 units_11: 256 units_12: 128 units_13: 96 units_14: 320 units_15: 448 units_16: 480 units_17: 192 units_18: 128 tuner/epochs: 10 tuner/initial_epoch: 4 tuner/bracket: 1 tuner/round: 1 tuner/trial_id: 388720d5b1b16c4bb3fce8f6046b9f0b Score: 46.60113525390625 Trial summary Hyperparameters: num_layers: 2 units_0: 384 units_1: 448 learning_rate: 0.01 units_2: 288 units_3: 384 units_4: 416 units_5: 128 units_6: 480 units_7: 480 units_8: 128 units_9: 192 units_10: 96 units_11: 256 units_12: 128 units_13: 96 units_14: 320 units_15: 448 units_16: 480 units_17: 192 units_18: 128 tuner/epochs: 4 tuner/initial_epoch: 0 tuner/bracket: 1 tuner/round: 0 Score: 47.37022399902344 Trial summary Hyperparameters: num_layers: 11 units_0: 192 units_1: 416 learning_rate: 0.001 units_2: 512 units_3: 64 units_4: 384 units_5: 256 units_6: 224 units_7: 96 units_8: 448 units_9: 128 units_10: 320 units_11: 320 units_12: 224 units_13: 480 units_14: 320 units_15: 96 units_16: 448 units_17: 96 units_18: 448 tuner/epochs: 10 tuner/initial_epoch: 0 tuner/bracket: 0 tuner/round: 0 Score: 47.967384338378906 Trial summary Hyperparameters: num_layers: 13 units_0: 320 units_1: 384 learning_rate: 0.01 units_2: 192 units_3: 160 units_4: 96 units_5: 64 units_6: 256 units_7: 64 units_8: 448 units_9: 384 units_10: 224 units_11: 160 units_12: 32 units_13: 256 units_14: 192 units_15: 224 units_16: 160 units_17: 64 units_18: 256 tuner/epochs: 10 tuner/initial_epoch: 0 tuner/bracket: 0 tuner/round: 0 Score: 48.18971252441406 Trial summary Hyperparameters: num_layers: 6 units_0: 128 units_1: 160 learning_rate: 0.01 units_2: 416 units_3: 416 units_4: 192 units_5: 224 units_6: 224 units_7: 96 units_8: 160 units_9: 160 units_10: 416 units_11: 32 units_12: 96 units_13: 288 units_14: 288 units_15: 320 units_16: 288 tuner/epochs: 10 tuner/initial_epoch: 4 tuner/bracket: 2 tuner/round: 2 tuner/trial_id: ac8ae66273cd0198508869a989a65a94 Score: 48.30392074584961 Trial summary Hyperparameters: num_layers: 11 units_0: 448 units_1: 192 learning_rate: 0.001 units_2: 288 units_3: 384 units_4: 416 units_5: 448 units_6: 448 units_7: 256 units_8: 320 units_9: 320 units_10: 192 units_11: 288 units_12: 192 units_13: 352 units_14: 64 units_15: 384 units_16: 384 units_17: 256 units_18: 416 tuner/epochs: 10 
tuner/initial_epoch: 4 tuner/bracket: 2 tuner/round: 2 tuner/trial_id: b90a6d65299b87e48fb75be1de5e784c Score: 49.07579040527344 Trial summary Hyperparameters: num_layers: 14 units_0: 512 units_1: 128 learning_rate: 0.001 units_2: 288 units_3: 416 units_4: 384 units_5: 192 units_6: 96 units_7: 320 units_8: 96 units_9: 224 units_10: 352 units_11: 352 units_12: 160 units_13: 96 units_14: 192 units_15: 224 units_16: 192 units_17: 416 units_18: 32 tuner/epochs: 10 tuner/initial_epoch: 0 tuner/bracket: 0 tuner/round: 0 Score: 50.57023620605469 Trial summary Hyperparameters: num_layers: 11 units_0: 352 units_1: 128 learning_rate: 0.001 units_2: 128 units_3: 416 units_4: 192 units_5: 416 units_6: 224 units_7: 192 units_8: 320 units_9: 448 units_10: 64 units_11: 256 units_12: 128 units_13: 256 units_14: 320 units_15: 224 units_16: 256 units_17: 448 units_18: 448 tuner/epochs: 10 tuner/initial_epoch: 0 tuner/bracket: 0 tuner/round: 0 Score: 50.61952209472656 Trial summary Hyperparameters: num_layers: 7 units_0: 448 units_1: 416 learning_rate: 0.001 units_2: 32 units_3: 320 units_4: 32 units_5: 320 units_6: 512 units_7: 512 units_8: 352 units_9: 192 units_10: 96 units_11: 416 units_12: 352 units_13: 224 units_14: 160 units_15: 416 units_16: 192 units_17: 480 units_18: 160 tuner/epochs: 10 tuner/initial_epoch: 4 tuner/bracket: 1 tuner/round: 1 tuner/trial_id: 22fd1918c2d72198585d5d72c5aa9782 Score: 57.10464096069336 ###Markdown Keras Tuner- Decide Number of Hidden Layers And Neuron In Neural Network ###Code import pandas as pd from tensorflow import keras from tensorflow.keras import layers from kerastuner.tuners import RandomSearch df=pd.read_csv('Real_Combine.csv') df.head() X=df.iloc[:,:-1] ## independent features y=df.iloc[:,-1] ## dependent features ###Output _____no_output_____ ###Markdown Hyperparameters1. How many number of hidden layers we should have?2. How many number of neurons we should have in hidden layers?3. Learning Rate ###Code def build_model(hp): model = keras.Sequential() for i in range(hp.Int('num_layers', 2, 20)): model.add(layers.Dense(units=hp.Int('units_' + str(i), min_value=32, max_value=512, step=32), activation='relu')) model.add(layers.Dense(1, activation='linear')) model.compile( optimizer=keras.optimizers.Adam( hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])), loss='mean_absolute_error', metrics=['mean_absolute_error']) return model tuner = RandomSearch( build_model, objective='val_mean_absolute_error', max_trials=5, executions_per_trial=3, directory='project', project_name='Air Quality Index') tuner.search_space_summary() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) tuner.search(X_train, y_train, epochs=5, validation_data=(X_test, y_test)) tuner.results_summary() ###Output _____no_output_____
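###Markdown A common follow-up to the random search above is to rebuild and evaluate the best configuration it found. The cell below is only a sketch (it is not part of the original notebook) and assumes that `tuner`, `X_train`, `X_test`, `y_train` and `y_test` from the earlier cells are still in scope.
###Code
# Minimal sketch: rebuild and evaluate the best RandomSearch configuration
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print("Best number of layers:", best_hps.get("num_layers"))
print("Best learning rate:", best_hps.get("learning_rate"))

best_model = tuner.hypermodel.build(best_hps)
best_model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
test_loss, test_mae = best_model.evaluate(X_test, y_test)
print("Test MAE:", test_mae)
###Output _____no_output_____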
labs/computing-experiment-metrics/enriching_decision_data.ipynb
###Markdown Computing Experiment Datasets 1: Enriching Optimizely Decision data with experiment metadataThis Lab is part of a multi-part series focused on computing useful experiment datasets. In this Lab, we'll use [PySpark](https://spark.apache.org/docs/latest/api/python/index.html), [Bravado](https://github.com/Yelp/bravado), and Optimizely's [Experiments API](https://library.optimizely.com/docs/api/app/v2/index.htmltag/Experiments) to enrich Optimizely ["decision"](https://docs.developers.optimizely.com/optimizely-data/docs/enriched-events-data-specificationdecisions-2) data with human-readable experiment and variation names.Why is this useful? Exported Optimizely [decision](https://docs.developers.optimizely.com/optimizely-data/docs/enriched-events-data-specificationdecisions-2) data contains a record of every "decision" made by Optimizely clients during your experiment. Each "decision" event records the moment that a visitor is added to a particular variation, and includes unique identifiers for the experiment and variation in question. For example: | visitor_id | experiment_id | variation_id | timestamp ||------------|---------------|--------------|--------------------------|| visitor_1 | 12345 | 678 | July 20th, 2020 14:25:00 || visitor_2 | 12345 | 789 | July 20th, 2020 14:28:13 || visitor_3 | 12345 | 678 | July 20th, 2020 14:31:01 |In order to work productively with this data in, it may be useful to enrich it with human-readable names for your experiments and variations, yielding e.g.:| visitor_id | experiment_id | variation_id | timestamp | experiment_name | variation_name ||------------|---------------|--------------|--------------------------|--------------------------|----------------|| visitor_1 | 12345 | 678 | July 20th, 2020 14:25:00 | free_shipping_experiment | control || visitor_2 | 12345 | 789 | July 20th, 2020 14:28:13 | free_shipping_experiment | treatment || visitor_3 | 12345 | 678 | July 20th, 2020 14:31:01 | free_shipping_experiment | control |This Lab is generated from a Jupyter Notebook. Scroll to the bottom of this page for instructions on how to run it on your own machine. Global parametersThe following global parameters are used to control the execution in this notebook. These parameters may be overridden by setting environment variables prior to launching the notebook, e.g.:```export OPTIMIZELY_DATA_DIR=~/my_analysis_dir``` ###Code import os from getpass import getpass # Determines whether output data should be written back to disk # Defaults to False; setting this to True may be useful when running this notebook # as part of a larger workflow SKIP_WRITING_OUTPUT_DATA_TO_DISK = os.environ.get("SKIP_WRITING_OUTPUT_DATA_TO_DISK", False) # This notebook requires an Optimizely API token. OPTIMIZELY_API_TOKEN = os.environ.get("OPTIMIZELY_API_TOKEN", "2:d6K8bPrDoTr_x4hiFCNVidcZk0YEPwcIHZk-IZb5sM3Q7RxRDafI") # Default path for reading and writing analysis data OPTIMIZELY_DATA_DIR = os.environ.get("OPTIMIZELY_DATA_DIR", "./covid_test_data") ###Output _____no_output_____ ###Markdown Create an Optimizely REST API clientFirst, we'll create an API client using the excellent [Bravado](https://github.com/Yelp/bravado) library.**Note:** In order to execute this step, you'll need an Optimizely [Personal Access Token](https://docs.developers.optimizely.com/web/docs/personal-token). You can supply this token to the notebook via the `OPTIMIZELY_API_TOKEN` environment variable. If `OPTIMIZELY_API_TOKEN` has not been set, you will be prompted to enter an access token manually. 
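###Markdown The interactive prompt mentioned above is not shown in this excerpt, so the cell below is only an assumed sketch of what the fallback could look like; it reuses the `getpass` import and the `OPTIMIZELY_API_TOKEN` variable defined in the global parameters cell.
###Code
# Assumed sketch (not part of the original lab code): fall back to an
# interactive prompt when no token was provided via the environment.
if not OPTIMIZELY_API_TOKEN:
    OPTIMIZELY_API_TOKEN = getpass("Enter your Optimizely personal access token: ")
###Output _____no_output_____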
###Code import getpass from bravado.requests_client import RequestsClient from bravado.client import SwaggerClient # Create a custom requests client for authentication requests_client = RequestsClient() requests_client.set_api_key( "api.optimizely.com", f"Bearer {OPTIMIZELY_API_TOKEN}", param_name="Authorization", param_in="header", ) # Create an API client using Optimizely's swagger/OpenAPI specification api_client = SwaggerClient.from_url( "https://api.optimizely.com/v2/swagger.json", http_client=requests_client, config={ "validate_swagger_spec": False, # validation produces several warnings } ) ###Output _____no_output_____ ###Markdown Now we'll test that this client can successfully authenticate to Optimizely ###Code import bravado.exception try: api_client.Projects.list_projects().response().result print("Successfully authenticated to Optimizely.") except bravado.exception.HTTPUnauthorized as e: print(f"Failed to authenticate to Optimizely: {e}") ###Output Successfully authenticated to Optimizely. ###Markdown Create a Spark Session ###Code from pyspark.sql import SparkSession num_cores = 1 driver_ip = "127.0.0.1" driver_memory_gb = 1 executor_memory_gb = 2 # Create a local Spark session spark = SparkSession \ .builder \ .appName("Python Spark SQL") \ .config(f"local[{num_cores}]") \ .config("spark.sql.repl.eagerEval.enabled", True) \ .config("spark.sql.repl.eagerEval.truncate", 120) \ .config("spark.driver.bindAddress", driver_ip) \ .config("spark.driver.host", driver_ip) \ .config("spark.driver.memory", f"{driver_memory_gb}g") \ .config("spark.executor.memory", f"{executor_memory_gb}g") \ .getOrCreate() ###Output _____no_output_____ ###Markdown Load decision dataWe'll start by loading decision data and isolating the decisions for the experiment specified by `experiment_id` and the time window specfied by `decisions_start` and `decisions_end`. Local Data StorageThese parameters specify where this notebook should read and write data. The default location is ./example_data in this notebook's directory. You can point the notebook to another data directory by setting the OPTIMIZELY_DATA_DIR environment variable prior to starting Jupyter Lab, e.g.```shexport OPTIMIZELY_DATA_DIR=~/optimizely_data``` ###Code # Local data storage locationsZ decisions_data_dir = os.path.join(OPTIMIZELY_DATA_DIR, "type=decisions") enriched_decisions_output_dir = os.path.join(OPTIMIZELY_DATA_DIR, "type=enriched_decisions") ###Output _____no_output_____ ###Markdown Read decision data from diskWe'll create a `decisions` view with the loaded data ###Code from lib import util util.read_parquet_data_from_disk( spark_session=spark, data_path=decisions_data_dir, view_name="decisions" ) spark.sql("SELECT * FROM decisions LIMIT 1") ###Output _____no_output_____ ###Markdown Enrich decision dataNext we'll query our decision data to list the distinct `experiment_id` values found in our dataset. ###Code from IPython.display import display, Markdown experiment_ids = spark.sql("SELECT DISTINCT experiment_id FROM decisions").toPandas().experiment_id print("Found these experiment IDs in the loaded decision data:") for exp_id in experiment_ids: print(f" {exp_id}") import pandas as pd import warnings # The Optimizely REST API spec causes Bravado to throw several warnings warnings.filterwarnings("ignore") def get_human_readable_name(obj): """Return a human-readable name from an Optimizely Experiment or Variation object. 
    This function is handy because Optimizely Web and Full Stack experiments
    use different attribute names to store human-readable names."""
    if hasattr(obj, "key") and obj.key is not None:
        # Optimizely Full Stack experiments
        return obj.key
    elif hasattr(obj, "name") and obj.name is not None:
        # Optimizely Web experiments
        return obj.name
    return None


def get_experiment_names(api_client, exp_id):
    "Retrieve human-readable names for the experiment and associated variations from Optimizely's experiment API"
    experiment = api_client.Experiments.get_experiment(experiment_id=exp_id).response().result
    return pd.DataFrame([
        {
            'experiment_id': str(experiment.id),
            'variation_id': str(variation.variation_id),
            'experiment_name': get_human_readable_name(experiment),
            'variation_name': get_human_readable_name(variation),
            'reference_variation_id': str(experiment.variations[0].variation_id)
        }
        for variation in experiment.variations
    ])


names = pd.concat([get_experiment_names(api_client, exp_id) for exp_id in experiment_ids])
spark.createDataFrame(names).createOrReplaceTempView("names")
spark.sql("SELECT * FROM names LIMIT 10")
###Output _____no_output_____
###Markdown Finally, we'll join our decision data with this mapping in order to enrich it with human-readable names.
###Code
spark.sql(f"""
    CREATE OR REPLACE TEMPORARY VIEW enriched_decisions AS
        SELECT
            names.experiment_name,
            names.variation_name,
            names.reference_variation_id,
            decisions.*
        FROM decisions
        LEFT JOIN names on decisions.variation_id = names.variation_id
""")
spark.sql("SELECT * FROM enriched_decisions LIMIT 1")
###Output _____no_output_____
###Markdown Writing our enriched decisions dataset to disk We'll store our enriched decision data in the directory specified by `enriched_decisions_output_dir`. Enriched decision data is partitioned into directories for each experiment included in the input decision data.
###Code
if not SKIP_WRITING_OUTPUT_DATA_TO_DISK:
    spark.sql("""SELECT * FROM enriched_decisions""") \
        .coalesce(1) \
        .write.mode('overwrite') \
        .partitionBy("experiment_id") \
        .parquet(enriched_decisions_output_dir)
###Output _____no_output_____
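###Markdown As a quick check that the write succeeded, the partitioned Parquet output can be read back and inspected. This is only a sketch, assuming the Spark session and `enriched_decisions_output_dir` defined above are still available.
###Code
# Read the enriched decisions back from disk and confirm the experiment partitioning
enriched_check_df = spark.read.parquet(enriched_decisions_output_dir)
enriched_check_df.select("experiment_id").distinct().show()
enriched_check_df.limit(5).show()
###Output _____no_output_____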
04-Aspect_Based_Opinion_Mining/code/.ipynb_checkpoints/NLP_Build_Model-checkpoint.ipynb
###Markdown LDA ###Code from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer from sklearn.decomposition import NMF, LatentDirichletAllocation import numpy as np tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, ngram_range=(1,3), max_features=1000, stop_words='english') tf = tf_vectorizer.fit_transform(sandwiches[:1000]) n_components = 6 lda = LatentDirichletAllocation(n_components=n_components, max_iter=5, learning_method='online', learning_offset=50., random_state=0) lda.fit(tf) vocab = tf_vectorizer.get_feature_names() # for topic in range(n_components): # print(f"TOPIC {topic}") # for j in np.argsort(-lda.components_,1)[topic,:15]: # print(vocab[j]) # print() ###Output _____no_output_____ ###Markdown Training the model with Naive Bayes1. replace pronouns with neural coref2. train the model with naive bayes ###Code from neuralcoref import Coref import en_core_web_lg spacy = en_core_web_lg.load() coref = Coref(nlp=spacy) # Define function for replacing pronouns def replace_pronouns(text): coref.one_shot_coref(text) return coref.get_resolved_utterances()[0] # Read annotated reviews df annotated_reviews_df = pd.read_pickle("annotated_reviews_df.pkl") annotated_reviews_df.head(3) annotated_reviews_df.iloc[2704,:] # Create a new column for text whose pronouns have been replaced annotated_reviews_df["text_pro"] = annotated_reviews_df.text.map(lambda x: replace_pronouns(x)) #annotated_reviews_df.to_pickle("annotated_reviews_df2.pkl") annotated_reviews_df = pd.read_pickle("annotated_reviews_df2.pkl") from sklearn.model_selection import train_test_split from sklearn.preprocessing import MultiLabelBinarizer # Convert the multi-labels into arrays mlb = MultiLabelBinarizer() y = mlb.fit_transform(annotated_reviews_df.aspects) X = annotated_reviews_df.text_pro # Split data into train and test set X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.25, random_state=0) # save the the fitted binarizer labels filename = 'mlb.pkl' pickle.dump(mlb, open(filename, 'wb')) from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.pipeline import Pipeline from sklearn.naive_bayes import MultinomialNB from skmultilearn.problem_transform import LabelPowerset import numpy as np # LabelPowerset allows for multi-label classification # Build a pipeline for multinomial naive bayes classification text_clf = Pipeline([('vect', CountVectorizer(stop_words = "english",ngram_range=(1, 1))), ('tfidf', TfidfTransformer(use_idf=False)), ('clf', LabelPowerset(MultinomialNB(alpha=1e-1))),]) text_clf = text_clf.fit(X_train, y_train) predicted = text_clf.predict(X_test) np.mean(predicted == y_test) from sklearn.model_selection import KFold, cross_val_score np.mean(cross_val_score(text_clf, X_train, y_train, cv=5, n_jobs=-1,scoring="accuracy")) from sklearn.linear_model import SGDClassifier text_clf_svm = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf-svm', LabelPowerset( SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, max_iter=6, random_state=42)))]) _ = text_clf_svm.fit(X_train, y_train) predicted_svm = text_clf_svm.predict(X_test) np.mean(predicted_svm == y_test) # from sklearn.model_selection import KFold, cross_val_score # np.mean(cross_val_score(text_clf_svm, X_train, y_train, cv=8, n_jobs=-1)) import pickle # Train naive bayes on full dataset and save model text_clf = Pipeline([('vect', CountVectorizer(stop_words = "english",ngram_range=(1, 1))), ('tfidf', 
TfidfTransformer(use_idf=False)), ('clf', LabelPowerset(MultinomialNB(alpha=1e-1))),]) text_clf = text_clf.fit(X, y) # save the model to disk filename = 'naive_model1.pkl' pickle.dump(text_clf, open(filename, 'wb')) #mlb.inverse_transform(predicted) pred_df = pd.DataFrame( {'text_pro': X_test, 'pred_category': mlb.inverse_transform(predicted) }) pd.set_option('display.max_colwidth', -1) pred_df ###Output _____no_output_____
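###Markdown Since the fitted pipeline and the label binarizer are pickled above, a short usage sketch is to reload both and predict aspect labels for a new review. This cell is an assumption on top of the notebook: the file names come from the cells above, and the review text is made up purely for illustration.
###Code
import pickle

# Reload the saved multi-label pipeline and the fitted binarizer
loaded_clf = pickle.load(open('naive_model1.pkl', 'rb'))
loaded_mlb = pickle.load(open('mlb.pkl', 'rb'))

# Hypothetical new review, purely for illustration
new_reviews = ["The sandwich was delicious but the service was painfully slow."]
predicted_labels = loaded_clf.predict(new_reviews)
print(loaded_mlb.inverse_transform(predicted_labels))
###Output _____no_output_____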
qsweep/docs/hardsweep.ipynb
###Markdown Introduction In this notebook, we will explain how the hardsweep functionality works in pysweep. We will make a dummpy instrument called "Gaussian2D" to which we can send set points as arrays. The detector will subsequently send back an array representing measured values: `f(x, y) = N(x-locx, y-locy)`. Here `N(x, y) = ampl * exp(-(sx*x**2+sy*y**2+sxy*x*y))` is a 2D Gaussian. We will show how to perform a hardware sweep and how we can nest a hardware sweep in a software controlled sweep. To demonstrate the latter, we will sweep the value of `locx` in software. For each set point of `locx` we will get a measurement represening an 2D image. Imports ###Code import numpy as np import itertools import matplotlib.pyplot as plt from functools import partial from qcodes import Instrument from qcodes.dataset.data_export import get_data_by_id from pytopo.sweep import sweep, do_experiment, hardsweep ###Output _____no_output_____ ###Markdown Setting up a dummy instrument ###Code class Gaussian2D(Instrument): def __init__(self, name, loc, sig, ampl): super().__init__(name) self._loc = np.array(loc) self._ampl = ampl if np.atleast_2d(sig).shape == (1, 1): sig = sig * np.identity(2) self._sig = sig self.add_parameter( "locx", set_cmd=lambda value: partial(self.set_loc, locx=value)() ) self.add_parameter( "locy", set_cmd=lambda value: partial(self.set_loc, locy=value)() ) def send(self, setpoints): self._setpoints = setpoints def set_loc(self, locx=None, locy=None): locx = locx or self._loc[0] locy = locy or self._loc[1] self._loc = np.array([locx, locy]) def __call__(self): r = self._setpoints - self._loc[:, None] sig_inv = np.linalg.inv(self._sig) rscaled = np.matmul(sig_inv, r) dist = np.einsum("ij,ij->j", r, rscaled) return self._ampl * np.exp(-dist) ###Output _____no_output_____ ###Markdown Make a detector ###Code loc = [0.5, -0.3] # The location of the Gaussian blob sig = np.array([[1.0, 0.9], [0.9, 3.0]]) ampl = 1 gaussian2d = Gaussian2D("gaussian", loc, sig, ampl) ###Output _____no_output_____ ###Markdown Test if the detector works ###Code def sample_points(): x = np.linspace(-1, 1, 100) y = np.linspace(-1, 1, 100) xx, yy = zip(*itertools.product(x, y)) xy = np.vstack([xx, yy]) return xy xy = sample_points() gaussian2d.send(xy) # Send the set points. In a real hardware sweep, this can be a different instrument meas = gaussian2d() # Make the detector measure at the setpoints # Construct an image to see if we were succesfull. We should see an rotated and elongated gaussian blurr. img = meas.reshape((100, 100)) plt.imshow(img, extent=[-1, 1, -1, 1], interpolation="nearest", origin="lower") plt.show() ###Output _____no_output_____ ###Markdown setting up the hardsweep We make a hardsweep by using the `hardsweep` decorator in the `pytopo.sweep` module. *In this example*, the number of independent parameters `ind` is two (`x` and `y`) and the numbers of measurement parameters is one (`i`). The decorator parameters `ind` and `dep` are arrays of tuples `(name, unit)`. The decorator does not place any restrictions on the number and type of arguments of the decorated function; this can be anything the user desires. However, the decorated function *must* return two parameters; the set points and measurements, both of which are ndarrays. The first dimension of the set points array contains the number of independent parameters. This must match the number of `ind` parameters given in the decorator, else an exception is raised. 
In the example, `setpoints` is a 2-by-N array since we have two independent parameters `x` and `y`. `N` is the number of set points. Likewise, the first dimension of the measurements array contains the number of measurements *per set point*. In this example, we have one measurement `f(x, y)` per set point, where `f` is the previously mentioned 2D Gaussian. This is reflected by the fact that the number of dependent parameters registered in the decorator is one: `i`. The number of measurements `N` must match the number of set points. ###Code @hardsweep(ind=[("x", "V"), ("y", "V")], dep=[("i", "A")]) def hardware_measurement(setpoints, detector): """ Use the detector to measure at given set points. Notice that we do not literally iterate over the set points, rather, we let the hardware take care of this. Returns: spoint_values (ndarray): 2-by-n array of setpoints measurements (ndarray): 1-by-n array of measurements """ spoint_values = setpoints() detector.send(spoint_values) measurements = detector() return spoint_values, measurements ###Output _____no_output_____ ###Markdown Perform the hardware sweep ###Code data = do_experiment( "hardweep/1", hardware_measurement(sample_points, gaussian2d) ) data.plot() ###Output _____no_output_____ ###Markdown Combining a hardware and software sweep We will now sweep the location of the Gaussian blob along the x-axis and for each set point, we will perform a hardware measurement. ###Code sweep_object = sweep(gaussian2d.locx, np.linspace(-0.8, 0.8, 9))( hardware_measurement(sample_points, gaussian2d) ) data = do_experiment( "hardweep/2", sweep_object ) ###Output Starting experimental run with id: 9 ###Markdown Plot all the hardware measurements ###Code data_dict = get_data_by_id(data.run_id) print("parameter name (units) - shape of array") print("---------------------------------------") for axis_data in data_dict[0]: print(f"{axis_data['name']} ({axis_data['unit']}) - {axis_data['data'].shape} array") unique_locx = np.unique(data_dict[0][0]['data']) f, axes = plt.subplots(3, 3, figsize=(7, 7)) for locx, ax in zip(unique_locx, axes.ravel()): imgdata1 = np.array(data_dict[0][3]['data'][data_dict[0][0]['data'] == locx]).reshape(100, 100) ax.imshow(imgdata1, extent=[-1, 1, -1, 1], interpolation="nearest", origin="lower") ax.set_title(f"locx=={locx}") ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) f.tight_layout() ###Output _____no_output_____
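###Markdown To make the shape contract described above concrete, here is a small check (a sketch added here, not part of the original notebook) that reuses `sample_points` and the `gaussian2d` detector defined earlier. ###Code
# Two independent parameters -> the set point array must be 2-by-N;
# one dependent parameter -> one measurement per set point.
xy = sample_points()
gaussian2d.send(xy)
meas = gaussian2d()
assert xy.shape[0] == 2               # matches ind=[("x", "V"), ("y", "V")]
assert meas.shape == (xy.shape[1],)   # matches dep=[("i", "A")]
print(xy.shape, meas.shape)
###Output _____no_output_____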
Numerical_Solutions_Tests/Heat_Equation_1D.ipynb
###Markdown Problem setup with finite differencesThe heat equation is written as follows:$$(1)\:\:\:\frac{\partial\phi}{\partial t}-\alpha\frac{\partial^{2}\phi}{\partial x^{2}}=0,$$where $\alpha$ is the thermal diffusivity, which we will consider to be $\alpha=\frac{1}{\pi^{2}}$. The initial condition will be considered as follows:$$\phi(x,0)=\alpha\:\text{sin}\left(\frac{x}{\sqrt{\alpha}}\right),$$and the boundary conditions:$$\phi(a,t)=\phi(b,t)=0,$$with the following initial condition for the first derivative with respect to the time:$$\partial_{t}\phi(x,0)=-\alpha\:\text{sin}\left(\frac{x}{\sqrt{\alpha}}\right).$$With this, the problem will be solved for:$$\Omega=[0,1],\:\:\:\:0\leq t\leq 1.$$The analytical solution is known so we can compare:$$\phi(x,t)=\alpha\:e^{-t}\text{sin}\left(\frac{x}{\sqrt{\alpha}}\right).$$Using the method of finite differences with the forward scheme for the temporal first order derivative in order to discretize (1), we get:$$\phi_{j}^{n+1}=\frac{\alpha\Delta t}{\Delta x^{2}}\left(\phi_{j+1}^{n}-2\phi_{j}^{n}+\phi_{j-1}^{n}\right)+\phi_{j}^{n}$$ ###Code import numpy as np import tensorflow as tf from tqdm import tqdm import math # Set data type: DTYPE = "float32" tf.keras.backend.set_floatx(DTYPE) # Set constants pi = np.pi alpha = 1/(pi**2) # Discretization: t_i, t_f, x_i, x_f = 0., 1., 0., 1. nx, nt = 100, 5000 delta_x, delta_t = (x_f - x_i)/nx, (t_f - t_i)/nt lam = delta_t/delta_x x = np.linspace(x_i, x_f, nx) t = np.linspace(t_i, t_f, nt) # Initial conditions: ic_phi_f = lambda x: alpha * tf.sin(tf.constant(x/math.sqrt(alpha), dtype = DTYPE)) ic_phi = tf.constant(ic_phi_f(x), dtype = DTYPE) print(lam) phi_n = ic_phi phi_evolved = [] for t in tqdm(range(0, nt)): phi_j_list = [] for j in range(0, nx): if (j != 0) and (j != nx-1): phi_n_p1 = (alpha*delta_t/(delta_x**2)) * (phi_n[j+1] - 2*phi_n[j] + phi_n[j-1]) + phi_n[j] phi_j_list.append(float(phi_n_p1)) else: phi_j_list.append(0.) # After getting out of the spatial bucle, we need to update the values of phi in that "j" so the next iteration over time takes it into account. 
phi_n = tf.constant(np.array(phi_j_list), dtype = DTYPE) phi_n = tf.reshape(phi_n, shape = (phi_n.shape[0], 1)) phi_evolved.append(np.array(phi_j_list)) phi_evolved = np.stack(phi_evolved, axis = 0) # Plot 2D heatmap import seaborn as sns import matplotlib.pyplot as plt # Set up meshgrid tspace = np.linspace(t_i, t_f, nt) xspace = np.linspace(x_i, x_f, nx) T, X = np.meshgrid(tspace, xspace) Xgrid = np.vstack([T.flatten(),X.flatten()]).T # Plot 2D heatmap plt.figure(figsize = (10, 7)) ax = sns.heatmap(phi_evolved) ax.set_xticks(range(0, xspace.shape[0], 30)) ax.set_xticklabels([np.round(xspace[i], 2) for i in list(range(0, xspace.shape[0], 30))]) ax.set_yticks(range(0, tspace.shape[0], 30)) ax.set_yticklabels([np.round(tspace[i], 2) for i in list(range(0, tspace.shape[0], 30))]) plt.xlabel("x") plt.ylabel("t") plt.title("phi") ax.invert_yaxis() plt.savefig("Images/phi_Heatmap_numeric.png") plt.show() import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D x = np.linspace(x_i, x_f, nx) t = np.linspace(t_i, t_f, nt) X, T = np.meshgrid(x, t) fig, ax = plt.subplots(subplot_kw={"projection": "3d"}) fig.set_size_inches(22, 13) surf = ax.plot_surface(X, T, phi_evolved, cmap = cm.coolwarm, linewidth = 0, antialiased = True) ax.view_init(elev = 10, azim = 230) ax.set_xlabel("x") ax.set_ylabel("t") ax.set_zlabel("phi") fig.colorbar(surf, shrink = 0.5, aspect = 5) plt.savefig("Images/phi_3D_numeric.png") plt.show() ###Output _____no_output_____ ###Markdown Setup of the problem with a PINN.Now, we are going to solve the problem using a neural network so we can check the result. ###Code # Import TensorFlow and NumPy import os import tensorflow as tf import numpy as np print("Is Tensorflow using GPU? ", tf.test.is_gpu_available()) os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"]="0" os.environ["TF_GPU_ALLOCATOR"]="cuda_malloc_async" # Set data type DTYPE = 'float32' tf.keras.backend.set_floatx(DTYPE) # Set constants pi = np.pi alpha = 1/(pi**2) # Define initial condition def fun_u_0(x): x_numpy = x.numpy() lambda x: alpha * tf.sin(tf.constant(x/math.sqrt(alpha), dtype = DTYPE)) return tf.constant(ic_phi_f(x_numpy), dtype = DTYPE) # Define the boundary conditions def fun_u_b(t, x): return tf.zeros(shape = (x.shape[0], 1), dtype = DTYPE) # Define residual of the PDE def fun_r(t, x, phi_t, phi_xx): return phi_t - alpha*phi_xx # Set number of data points N_0, N_b, N_r = 5000, 5000, 20000 # Set boundary tmin, tmax, xmin, xmax = 0., 1., 0., 1. 
# Lower bounds and upper bounds lb, ub = tf.constant([tmin, xmin], dtype = DTYPE), tf.constant([tmax, xmax], dtype = DTYPE) # Set random seed for reproducible results tf.random.set_seed(0) # Boundary data t_b = tf.random.uniform((N_b,1), lb[0], ub[0], dtype=DTYPE) x_b = lb[1] + (ub[1] - lb[1]) * tf.keras.backend.random_bernoulli((N_b,1), 0.5, dtype=DTYPE) X_b = tf.concat([t_b, x_b], axis=1) # Evaluate boundary condition at (t_b,x_b) u_b = fun_u_b(t_b, x_b) # Draw uniform sample points for initial boundary data t_0 = tf.ones((N_0,1), dtype = DTYPE)*lb[0] x_0 = tf.random.uniform((N_0,1), lb[1], ub[1], dtype = DTYPE) X_0 = tf.concat([t_0, x_0], axis=1) # Evaluate intitial condition at x_0 u_0 = fun_u_0(x_0) # Draw uniformly sampled collocation points t_r = tf.random.uniform((N_r,1), lb[0], ub[0], dtype = DTYPE) x_r = tf.random.uniform((N_r,1), lb[1], ub[1], dtype = DTYPE) X_r = tf.concat([t_r, x_r], axis=1) # Collect inital data in lists X_data = [X_0, X_b] u_data = [u_0, u_b] import matplotlib.pyplot as plt fig = plt.figure(figsize=(9,6)) plt.scatter(t_0, x_0, c = "g", marker = "X", vmin = -1, vmax = 1, label = "Initial data") plt.scatter(t_r, x_r, c = "r", marker = ".", alpha = 0.1, label = "Internal data") plt.scatter(t_b, x_b, c = "b", marker = "X", alpha = 0.1, label = "Boundary data") plt.legend() plt.xlabel("t") plt.ylabel("x") plt.title('Positions of collocation points and initial/boundary data') plt.show() def init_model(num_hidden_layers = 8, num_neurons_per_layer = 30): # Initialize a feedforward neural network model = tf.keras.Sequential() # Input is two-dimensional (time + one spatial dimension) model.add(tf.keras.Input(2)) # Introduce a scaling layer to map input to [lb, ub] scaling_layer = tf.keras.layers.Lambda(lambda x: 2.0*(x - lb)/(ub - lb) - 1.0) model.add(scaling_layer) # Append hidden layers for _ in range(num_hidden_layers): model.add(tf.keras.layers.Dense(num_neurons_per_layer, activation=tf.keras.activations.get('tanh'), kernel_initializer='glorot_normal')) # Output is one-dimensional model.add(tf.keras.layers.Dense(1, activation=tf.keras.activations.get('tanh'))) return model def get_r(model, X_r): # A tf.GradientTape is used to compute derivatives in TensorFlow with tf.GradientTape(persistent=True) as tape: # Split t and x to compute partial derivatives t, x = X_r[:, 0:1], X_r[:,1:2] tape.watch(t) tape.watch(x) # Determine residual phi = model(tf.stack([t[:,0], x[:,0]], axis=1)) phi_x = tape.gradient(phi, x) phi_t = tape.gradient(phi, t) phi_xx = tape.gradient(phi_x, x) del tape return fun_r(t, x, phi_t, phi_xx) def compute_loss(model, X_r, X_data, u_data): # Compute loss_r r = get_r(model, X_r) loss_r = tf.reduce_mean(tf.square(r)) u_pred_ic = model(X_data[0]) u_pred_bc = model(X_data[1]) loss_ic = tf.reduce_mean(tf.square(u_data[0] - u_pred_ic)) loss_bc = tf.reduce_mean(tf.square(u_data[1] - u_pred_bc)) loss = 10*loss_r + 0.01*loss_ic + 0.1*loss_bc return loss, loss_ic, loss_r, loss_bc def get_grad(model, X_r, X_data, u_data): with tf.GradientTape(persistent=True) as tape: # This tape is for derivatives with # respect to trainable variables tape.watch(model.trainable_variables) loss, loss_ic, loss_r, loss_bc = compute_loss(model, X_r, X_data, u_data) g = tape.gradient(loss, model.trainable_variables) del tape return loss, g, loss_ic, loss_r, loss_bc # Create batches from data: batch_size = 2000 X_r_batches = tf.data.Dataset.from_tensor_slices(X_r.numpy()).batch(batch_size) # Initialize model aka u_\theta model = init_model() lr = 0.001 #lr = 
tf.keras.optimizers.schedules.PiecewiseConstantDecay([1000,3000],[1e-2,1e-3,5e-4]) # Choose the optimizer optim = tf.keras.optimizers.Adam(learning_rate = lr) # Number of training epochs epochs = 2500 loss_hist, loss_ic_hist, loss_r_hist = [], [], [] for epoch in range(epochs): for step, X_r_batch in enumerate(X_r_batches): loss, grad_theta, loss_ic, loss_r, loss_bc = get_grad(model, X_r_batch, X_data, u_data) optim.apply_gradients(zip(grad_theta, model.trainable_variables)) loss_hist.append(loss.numpy()) loss_ic_hist.append(loss_ic.numpy()) loss_r_hist.append(loss_r.numpy()) if epoch%10 == 0: print("Epoch ", str(epoch), ": Total Loss = ", str(float(loss)), ", Loss IC = ", str(float(loss_ic)), ", Loss BC = ", str(float(loss_bc)), ", Loss R = ", str(float(loss_r))) # Plot 2D heatmap import seaborn as sns # Set up meshgrid N = 600 tspace = np.linspace(lb[0], ub[0], N + 1) xspace = np.linspace(lb[1], ub[1], N + 1) T, X = np.meshgrid(tspace, xspace) Xgrid = np.vstack([T.flatten(),X.flatten()]).T # Determine predictions of u(t, x) phi = model(tf.cast(Xgrid, DTYPE)) # Reshape upred phi = phi.numpy().reshape(N+1,N+1).T # Plot 2D heatmap plt.figure(figsize = (10, 7)) ax = sns.heatmap(phi) ax.set_xticks(range(0, xspace.shape[0], 30)) ax.set_xticklabels([np.round(xspace[i], 2) for i in list(range(0, xspace.shape[0], 30))]) ax.set_yticks(range(0, tspace.shape[0], 30)) ax.set_yticklabels([np.round(tspace[i], 2) for i in list(range(0, tspace.shape[0], 30))]) plt.xlabel("x") plt.ylabel("t") plt.title("Phi") ax.invert_yaxis() plt.savefig("Images/HeatEquation_1D_Eq_Heatmap.png") plt.show() # Plot 3D surface from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm fig, ax = plt.subplots(subplot_kw={"projection": "3d"}) fig.set_size_inches(22, 13) surf = ax.plot_surface(X, T, phi, cmap = cm.coolwarm, linewidth = 0, antialiased = True) ax.view_init(elev = 10, azim = 150) ax.set_xlabel('$t$') ax.set_ylabel('$x$') ax.set_zlabel('$phi(t,x)$') ax.set_title('$phi(t,x)$') fig.colorbar(surf, shrink = 0.5, aspect = 5) plt.savefig("Images/HeatEquation_1D_Eq_3D.png") plt.show() ###Output _____no_output_____
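###Markdown Since the analytical solution is known, a quick error check can be added (a sketch, not part of the original notebook; it reuses `model`, `alpha` and `DTYPE` from the cells above and compares the PINN prediction with the exact solution at t = 0.5). ###Code
x_test = np.linspace(0., 1., 101)
t_test = np.full_like(x_test, 0.5)

# Exact solution: phi(x, t) = alpha * exp(-t) * sin(x / sqrt(alpha))
phi_exact = alpha * np.exp(-t_test) * np.sin(x_test / np.sqrt(alpha))

# PINN prediction; the model takes (t, x) pairs as input, as in the cells above
phi_pred = model(tf.constant(np.stack([t_test, x_test], axis=1), dtype=DTYPE))
phi_pred = phi_pred.numpy().squeeze()

print("max abs error at t = 0.5:", np.abs(phi_pred - phi_exact).max())
###Output _____no_output_____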
ipynb-examples/example3-statemachine.ipynb
###Markdown Example 3: A State Machine built with ConditionalUpdate In this example we describe how **ConditionalUpdate** works in the context of a vending machine that will dispense an item when it has received 4 tokens. If a refund is requested, it returns the tokens. ###Code import pyrtl pyrtl.reset_working_block() token_in = pyrtl.Input(1, 'token_in') req_refund = pyrtl.Input(1, 'req_refund') dispense = pyrtl.Output(1, 'dispense') refund = pyrtl.Output(1, 'refund') state = pyrtl.Register(3, 'state') ###Output _____no_output_____ ###Markdown First new step, let's **enumerate a set of constants to serve as our states** ###Code WAIT, TOK1, TOK2, TOK3, DISPENSE, REFUND = [pyrtl.Const(x, bitwidth=3) for x in range(6)] ###Output _____no_output_____ ###Markdown Now we could build a state machine using just the registers and logic discussed in the earlier examples, but doing operations *conditional* on some input is a pretty fundamental operation in hardware design. **PyRTL provides a class "ConditionalUpdate"** to provide a predicated update to registers, wires, and memories. **Conditional assignments** are specified with a *"|="* instead of a *"<<="* operator. The conditional assignment is only valid in the context of a condition, and the update to those values only happens when that condition is true. In hardware this is implemented with a simple mux -- for people coming from software it is important to remember that this is describing a big logic function NOT an "if-then-else" clause. All of these things will execute straight through when *build_everything* is called. More comments after the code. **One more thing:** ConditionalUpdate might not always be the best item to use. If the update is simple, a regular mux(sel_wire, falsecase=f_wire, truecase=t_wire) can be sufficient. ###Code with pyrtl.conditional_assignment: with req_refund: # signal of highest precedence state.next |= REFUND with token_in: # if token received, advance state in counter sequence with state == WAIT: state.next |= TOK1 with state == TOK1: state.next |= TOK2 with state == TOK2: state.next |= TOK3 with state == TOK3: state.next |= DISPENSE # 4th token received, go to dispense with pyrtl.otherwise: # token received but in state where we can't handle it state.next |= REFUND # unconditional transition from these two states back to wait state # NOTE: the parens are needed because in Python the "|" operator is lower precedence # than the "==" operator! with (state == DISPENSE) | (state == REFUND): state.next |= WAIT dispense <<= state == DISPENSE refund <<= state == REFUND ###Output _____no_output_____ ###Markdown A couple of other things to note:* A condition can be nested within another condition and the implied hardware is that the left-hand-side should only get that value if ALL of the encompassing conditions are satisfied.* Only one conditional at each level can be true meaning that all conditions are implicitly also saying that none of the prior conditions at the same level also have been true. The highest priority condition is listed first, and in a sense you can think about each other condition as an "elif".* If not every condition is enumerated, the default value for the register under those cases will be the same as it was the prior cycle ("state.next |= state" in this example). The default for a wirevector is 0.* There is a way to specify something like an "else" instead of "elif" and that is with an "otherwise" (as seen on the line above "state.next |= REFUND"). 
This condition will be true if none of the other conditions at the same level were also true (for this example specifically state.next will get REFUND when req_refund==0, token_in==1, and state is not in TOK1, TOK2, TOK3, or DISPENSE.* Not shown here, you can update multiple different registers, wires, and memories all under the same set of conditionals. A more artificial example might make it even more clear how these rules interact:```python with a: r.next |= 1 <-- when a is truewith d: r2.next |= 2 <-- when a and d are truewith otherwise: r2.next |= 3 <-- when a is true and d is falsewith b == c: r.next |= 0 <-- when a is not true and b & c is true``` Now let's **build and test our state machine**. ###Code sim_trace = pyrtl.SimulationTrace() sim = pyrtl.Simulation(tracer=sim_trace) ###Output _____no_output_____ ###Markdown Rather than just give some random inputs, let's **specify some specific 1 bit values**. Recallthat the sim.step method takes a dictionary mapping inputs to their values. We could justspecify the input set directly as a dictionary but it gets pretty ugly -- let's use some pythonto parse them up. ###Code sim_inputs = { 'token_in': '0010100111010000', 'req_refund': '1100010000000000' } for cycle in range(len(sim_inputs['token_in'])): sim.step({w: int(v[cycle]) for w, v in sim_inputs.items()}) ###Output _____no_output_____ ###Markdown Also, to make our input/output easy to reason about let's **specify an order to the traces** ###Code sim_trace.render_trace(trace_list=['token_in', 'req_refund', 'state', 'dispense', 'refund']) ###Output _____no_output_____ ###Markdown Example 3: A State Machine built with ConditionalUpdate In this example we describe how **ConditionalUpdate** works in the context ofa vending machine that will dispense an item when it has received 4 tokens.If a refund is requested, it returns the tokens. ###Code import pyrtl pyrtl.reset_working_block() token_in = pyrtl.Input(1, 'token_in') req_refund = pyrtl.Input(1, 'req_refund') dispense = pyrtl.Output(1, 'dispense') refund = pyrtl.Output(1, 'refund') state = pyrtl.Register(3, 'state') ###Output _____no_output_____ ###Markdown First new step, let's **enumerate a set of constant to serve as our states** ###Code WAIT, TOK1, TOK2, TOK3, DISPENSE, REFUND = [pyrtl.Const(x, bitwidth=3) for x in range(6)] ###Output _____no_output_____ ###Markdown Now we could build a state machine using just the registers and logic discussedin the earlier examples, but doing operations *conditional* on some input is a prettyfundamental operation in hardware design. **PyRTL provides a class "ConditionalUpdate"**to provide a predicated update to a registers, wires, and memories.**Conditional assignments** are specified with a *"|="* instead of a *"<<="* operator. Theconditional assignment is only value in the context of a condition, and update to thosevalues only happens when that condition is true. In hardware this is implementedwith a simple mux -- for people coming from software it is important to remember that thisis describing a big logic function NOT an "if-then-else" clause. All of these things willexecute straight through when *build_everything* is called. More comments after the code.**One more thing:** ConditionalUpdate might not always be the best item to use.if the update is simple, a regular mux(sel_wire, falsecase=f_wire, truecase=t_wire)can be sufficient. 
###Code with pyrtl.conditional_assignment: with req_refund: # signal of highest precedence state.next |= REFUND with token_in: # if token received, advance state in counter sequence with state == WAIT: state.next |= TOK1 with state == TOK1: state.next |= TOK2 with state == TOK2: state.next |= TOK3 with state == TOK3: state.next |= DISPENSE # 4th token received, go to dispense with pyrtl.otherwise: # token received but in state where we can't handle it state.next |= REFUND # unconditional transition from these two states back to wait state # NOTE: the parens are needed because in Python the "|" operator is lower precedence # than the "==" operator! with (state == DISPENSE) | (state == REFUND): state.next |= WAIT dispense <<= state == DISPENSE refund <<= state == REFUND ###Output _____no_output_____ ###Markdown A couple of other things to note:* A condition can be nested within another condition and the implied hardware is that the left-hand-side should only get that value if ALL of the encompassing conditions are satisfied.* Only one conditional at each level can be true meaning that all conditions are implicitly also saying that none of the prior conditions at the same level also have been true. The highest priority condition is listed first, and in a sense you can think about each other condition as an "elif".* If not every condition is enumerated, the default value for the register under those cases will be the same as it was the prior cycle ("state.next |= state" in this example). The default for a wirevector is 0.* There is a way to specify something like an "else" instead of "elif" and that is with an "otherwise" (as seen on the line above "state.next <<= REFUND"). This condition will be true if none of the other conditions at the same level were also true (for this example specifically state.next will get REFUND when req_refund==0, token_in==1, and state is not in TOK1, TOK2, TOK3, or DISPENSE.* Not shown here, you can update multiple different registers, wires, and memories all under the same set of conditionals. A more artificial example might make it even more clear how these rules interact:```python with a: r.next |= 1 <-- when a is true with d: r2.next |= 2 <-- when a and d are true with otherwise: r2.next |= 3 <-- when a is true and d is falsewith b == c: r.next |= 0 <-- when a is not true and b == c is true``` Now let's **build and test our state machine**. ###Code sim_trace = pyrtl.SimulationTrace() sim = pyrtl.Simulation(tracer=sim_trace) ###Output _____no_output_____ ###Markdown Rather than just give some random inputs, let's **specify some specific 1 bit values**. Recallthat the sim.step method takes a dictionary mapping inputs to their values. We could justspecify the input set directly as a dictionary but it gets pretty ugly -- let's use some pythonto parse them up. ###Code sim_inputs = { 'token_in': '0010100111010000', 'req_refund': '1100010000000000' } for cycle in range(len(sim_inputs['token_in'])): sim.step({w: int(v[cycle]) for w, v in sim_inputs.items()}) ###Output _____no_output_____ ###Markdown Also, to make our input/output easy to reason about let's **specify an order to the traces** ###Code sim_trace.render_trace(trace_list=['token_in', 'req_refund', 'state', 'dispense', 'refund']) ###Output _____no_output_____
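###Markdown As a small follow-up (a sketch added here, not part of the original example), the recorded trace can also be inspected programmatically; this assumes `SimulationTrace.trace` behaves as a dictionary mapping wire names to per-cycle values, as in recent PyRTL releases. ###Code
# Count how many cycles actually dispensed or refunded in the run above.
dispense_values = sim_trace.trace['dispense']
refund_values = sim_trace.trace['refund']
print("dispense cycles:", sum(dispense_values))
print("refund cycles:", sum(refund_values))
###Output _____no_output_____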
How-To-Remove-Empty-Line.ipynb
###Markdown How To Remove Empty Line? ###Code cat ~/Work/fruit1.txt %%bash sed -n '/^$/!p' ~/Work/fruit1.txt : OR : sed '/^$/d' ~/Work/fruit1.txt : OR : awk "NF" ~/Work/fruit1.txt %%python3 from pathlib import Path path = Path('/home/mana/Work/') text = (path/"fruit1.txt").read_text().splitlines() text = [i for i in text if i and not i.isspace()] print(*text, sep = '\n') ###Output Apple. Watermelon. Orange.
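###Markdown A Python variant that also writes the cleaned text back to disk (a sketch, not part of the original notebook; the output file name is made up for illustration). ###Code
%%python3
from pathlib import Path

path = Path('/home/mana/Work/')
lines = (path / "fruit1.txt").read_text().splitlines()
cleaned = [line for line in lines if line.strip()]  # drop empty and whitespace-only lines
(path / "fruit1_cleaned.txt").write_text("\n".join(cleaned) + "\n")
print(*cleaned, sep='\n')
###Output _____no_output_____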
notebooks/diffusion_equation_test.ipynb
###Markdown Adding the boundary conditions. ###Code for i in range(0, Nx, dx): M[i, 0] = float(10 * 10**5) for j in range(0, Nt): M[0, j] = float(10*10**5) M[Nx-1, j] = float(20*10**5) format_sc_n = lambda x: "{:1.0e}".format(x) if x > 1000 else x # scientific notation df = pd.DataFrame(M, dtype=int) df = df.applymap(format_sc_n) df ###Output _____no_output_____ ###Markdown Calculating the coefficients of the matrix: ###Code # zero matrix for further calculations (float dtype so the coefficients are not truncated to integers) A = np.zeros((Nx, Nx), dtype=float) # we started with 1 and stopped on 9 here for i in range(1, Nx-1): A[i,i] = (2*dt/dx**2) * Xi + 1 A[i, i+1] = (-dt/dx**2)*Xi A[i, i-1] = (-dt/dx**2)*Xi A[0,0] = 1 A[Nx-1, Nx-1] = 1 ###Output _____no_output_____ ###Markdown The coefficient matrix has the following form: ###Code pd.DataFrame(A, dtype=float) ###Output _____no_output_____ ###Markdown Solving the system of equations at each time step ###Code for j in range(1, Nt): M[:, j] = np.linalg.solve(A, M[:, j-1]) pd.set_option('display.max_columns', None) result = pd.DataFrame(M, dtype=float) result = result.applymap(lambda x: "{:1.3e}".format(x)) result ### pd.DataFrame(M[:, i]) # the columns we need x = np.linspace(0, 10, Nx) plt.xlabel('X') plt.plot( x, M[:, 0], 'r-', x, M[:, 2], 'g-.', x, M[:, int(Nt/2)], 'b:', x, M[:, int(Nt-1)], 'm.', ) plt.show() ###Output _____no_output_____
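###Markdown Because the matrix `A` does not change between time steps, the repeated `np.linalg.solve` calls can be replaced by a single factorization (a sketch, not part of the original notebook; it assumes SciPy is available and reuses `A`, `M` and `Nt` from above). ###Code
from scipy.linalg import lu_factor, lu_solve

# Factor A once, then reuse the factorization at every time step.
lu, piv = lu_factor(A)
for j in range(1, Nt):
    M[:, j] = lu_solve((lu, piv), M[:, j-1])
###Output _____no_output_____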
nbs/16_callback.progress.ipynb
###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def begin_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): 
self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(10) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. 
Converted 38_tutorial.text.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code #|export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" order,_stateattrs = 60,('mbar','pbar') def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #|export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #|hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #|hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = 
learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code #|export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" order,run_valid=65,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") if not(self.run): return self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" if not self.nb_batches: return rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #|slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) learn.predict(torch.tensor([[0.1]])) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code #|export class CSVLogger(Callback): "Log the results displayed in `learn.path/fname`" order=60 def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." if hasattr(self, "gather_preds"): return self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.file.flush() os.fsync(self.file.fileno()) self.old_logger(log) def after_fit(self): "Close the file and clean up." if hasattr(self, "gather_preds"): return self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #|hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 01a_losses.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 10b_tutorial.albumentations.ipynb. 
Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 18b_callback.preds.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted dev-setup.ipynb. Converted index.ipynb. Converted quick_start.ipynb. Converted tutorial.ipynb. 
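###Markdown A small usage sketch (added here, not part of the original notebook): with `append=True` the same `history.csv` is reused across successive calls to `fit`; note that `before_fit` writes the header row again on every run. ###Code
learn = synth_learner(cbs=CSVLogger(append=True))
learn.fit(2)
learn.fit(2)  # appends to the same history.csv instead of overwriting it
print(learn.csv_logger.read_log().tail())
os.remove(learn.path/learn.csv_logger.fname)  # clean up, mirroring the cells above
###Output _____no_output_____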
###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) yield self if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after=ProgressCallback def begin_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v 
in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(10) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. 
Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. Converted migrating.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def begin_fit(self): 
self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. 
Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, 
targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. 
Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over 
the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) yield self if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after=ProgressCallback def begin_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(10) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. 
###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. Converted migrating.ipynb. 
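###Markdown The `append` flag described above is easy to check directly. The following cell is an added sketch, not part of the original notebook: it reuses the `synth_learner` test helper from the cells above, runs two short fits with `CSVLogger(append=True)` and reads the file back; the epoch counts are arbitrary. Note that `begin_fit` as written re-writes the header of metric names at the start of every fit, so an appended log contains one header line per run.
###Code
# Added sketch: check that CSVLogger(append=True) keeps the rows from earlier runs.
learn = synth_learner(cbs=CSVLogger(append=True))
learn.fit(2)                       # opens history.csv, writes the header plus two epoch rows
learn.fit(3)                       # re-opens it in 'a' mode: header again, then three more rows
df = learn.csv_logger.read_log()   # rows from both runs (plus the repeated header line) are present
os.remove(learn.path/learn.csv_logger.fname)
###Output _____no_output_____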
###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def begin_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def 
after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. 
Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" order,_stateattrs = 60,('mbar','pbar') def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide 
#Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" order,run_valid=65,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") if not(self.run): return self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" if not self.nb_batches: return rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) learn.predict(torch.tensor([[0.1]])) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): "Log the results displayed in `learn.path/fname`" order=60 def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." if hasattr(self, "gather_preds"): return self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.file.flush() os.fsync(self.file.fileno()) self.old_logger(log) def after_fit(self): "Close the file and clean up." if hasattr(self, "gather_preds"): return self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 01a_losses.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. 
Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 10b_tutorial.albumentations.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 18b_callback.preds.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted dev-setup.ipynb. Converted index.ipynb. Converted quick_start.ipynb. Converted tutorial.ipynb. 
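###Markdown A combination the pieces above support (an added sketch, not an official recipe): run a fit inside `no_bar` so that no master bar or per-batch progress bar is drawn, while keeping `CSVLogger` attached so the per-epoch values still end up in `learn.path/'history.csv'` for later inspection.
###Code
# Added sketch: train without progress bars but keep a CSV record of the metrics.
learn = synth_learner(cbs=CSVLogger())
with learn.no_bar():                 # ProgressCallback is removed for the duration of the block
    learn.fit(3)
print(learn.csv_logger.read_log())   # the epochs were still written to the csv file
os.remove(learn.path/learn.csv_logger.fname)
###Output _____no_output_____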
###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") if not(self.run): return self.nb_batches = [] assert 
hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) learn.predict(torch.tensor([[0.1]])) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." if hasattr(self, "gather_preds"): return self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.file.flush() os.fsync(self.file.fileno()) self.old_logger(log) def after_fit(self): "Close the file and clean up." if hasattr(self, "gather_preds"): return self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. 
Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) yield self if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works 
without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after=ProgressCallback def begin_fit(self): self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(10) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 18_callback_fp16.ipynb. Converted 03_torchcore.ipynb. Converted 32_text_models_awdlstm.ipynb. Converted 03a_layers.ipynb. Converted 14a_callback_data.ipynb. Converted 07_data_block.ipynb. Converted 12_optimizer.ipynb. Converted 17_callback_tracker.ipynb. Converted 90_xse_resnext.ipynb. Converted 71_callback_tensorboard.ipynb. Converted 60_medical_imaging.ipynb. Converted 05_data_core.ipynb. Converted 06_data_transforms.ipynb. Converted 08_vision_core.ipynb. Converted 00_test.ipynb. Converted 01b_core_dispatch.ipynb. Converted 96_data_external.ipynb. Converted 15a_vision_models_unet.ipynb. Converted 01c_core_transform.ipynb. Converted 13_learner.ipynb. Converted 36_text_models_qrnn.ipynb. Converted 97_utils_test.ipynb. 
Converted 10_pets_tutorial.ipynb. Converted 34_callback_rnn.ipynb. Converted 42_tabular_rapids.ipynb. Converted 16_callback_progress.ipynb. Converted 70_callback_wandb.ipynb. Converted 09a_vision_data.ipynb. Converted 38_tutorial_ulmfit.ipynb. Converted 95_index.ipynb. Converted 22_tutorial_imagenette.ipynb. Converted 20_interpret.ipynb. Converted 41_tabular_model.ipynb. Converted 09b_vision_utils.ipynb. Converted 50_data_block_examples.ipynb. Converted 21_vision_learner.ipynb. Converted 37_text_learner.ipynb. Converted 20a_distributed.ipynb. Converted 30_text_core.ipynb. Converted 02_core_script.ipynb. Converted 09_vision_augment.ipynb. Converted 01_core_foundation.ipynb. Converted 15_callback_hook.ipynb. Converted 31_text_data.ipynb. Converted 13a_metrics.ipynb. Converted 11_vision_models_xresnet.ipynb. Converted 65_medical_text.ipynb. Converted 40_tabular_core.ipynb. Converted 33_text_models_core.ipynb. Converted 14_callback_schedule.ipynb. Converted 19_callback_mixup.ipynb. Converted 01a_core_utils.ipynb. Converted 04_data_load.ipynb. Converted 23_tutorial_transfer_learning.ipynb. Converted 35_tutorial_wikitext.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) yield self if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def 
tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after=ProgressCallback def begin_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(10) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. 
Converted 12_optimizer.ipynb. Converted 13_learner.ipynb. Converted 13a_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.transfer_learning.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.ulmfit.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.learner.ipynb. Converted 43_tabular.model.ipynb. Converted 45_collab.ipynb. Converted 50_datablock_examples.ipynb. Converted 60_medical.imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 97_test_utils.ipynb. Converted index.ipynb. Converted migrating.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export 
@patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appened to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. 
Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. 
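The ProgressCallback cells above hook a small set of events (`before_fit`, `after_batch`, `after_fit`) that any reporting callback can reuse. A minimal sketch, assuming a fastai v2 install, of a console-only reporter built on those same events; `PrintProgress` and its `every` argument are made up for this illustration:

```python
from fastai.test_utils import synth_learner
from fastai.callback.core import Callback

class PrintProgress(Callback):
    "Print the smoothed training loss every `every` batches instead of drawing a bar"
    def __init__(self, every=5): self.every = every
    def before_fit(self): print(f"starting fit for {self.n_epoch} epochs")
    def after_batch(self):
        # `smooth_loss` is maintained by Recorder, so guard for it just like ProgressCallback does
        if self.model.training and self.iter % self.every == 0 and hasattr(self, 'smooth_loss'):
            print(f"epoch {self.epoch} batch {self.iter}: smooth_loss={self.smooth_loss:.4f}")
    def after_fit(self): print("fit finished")

learn = synth_learner(cbs=PrintProgress())
with learn.no_bar():   # drop the default bars so only the prints remain
    learn.fit(2)
```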
###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" _stateattrs=('mbar','pbar') run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") if not(self.run): return 
self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" if not self.nb_batches: return rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) learn.predict(torch.tensor([[0.1]])) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." if hasattr(self, "gather_preds"): return self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.file.flush() os.fsync(self.file.fileno()) self.old_logger(log) def after_fit(self): "Close the file and clean up." if hasattr(self, "gather_preds"): return self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 01a_losses.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 10b_tutorial.albumentations.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 18b_callback.preds.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. 
Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted dev-setup.ipynb. Converted index.ipynb. Converted quick_start.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 
'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") if not(self.run): return self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" if not self.nb_batches: return rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) learn.predict(torch.tensor([[0.1]])) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." if hasattr(self, "gather_preds"): return self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.file.flush() os.fsync(self.file.fileno()) self.old_logger(log) def after_fit(self): "Close the file and clean up." if hasattr(self, "gather_preds"): return self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. 
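A short sketch of what that `append` behaviour looks like across two consecutive `fit` calls, assuming the exported `fastai.callback.progress.CSVLogger` and `synth_learner` from `fastai.test_utils`:

```python
from fastai.test_utils import synth_learner
from fastai.callback.progress import CSVLogger

learn = synth_learner(cbs=CSVLogger(append=True))
learn.fit(2)                          # creates history.csv: header + 2 epoch rows
learn.fit(3)                          # reopened in 'a' mode: 3 more rows (and another header line)
df = learn.csv_logger.read_log()      # read_log is just pd.read_csv on learn.path/fname
print(df)
```

Note that this revision of `_write_line` also calls `flush` and `os.fsync`, so the CSV can be inspected while training is still running, and because `before_fit` rewrites the header on every run, an appended file ends up with repeated header rows that you may want to filter out.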
###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 01a_losses.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 10b_tutorial.albumentations.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 18b_callback.preds.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted dev-setup.ipynb. Converted index.ipynb. Converted quick_start.ipynb. Converted tutorial.ipynb. 
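One detail that varies between the `no_bar` revisions collected here: the oldest copy simply yields, while the later ones wrap the `yield` in `try`/`finally` so the progress callback comes back even if `fit` raises. A sketch of the same pattern for temporarily removing any callback; `without_cb` is a made-up helper name, while `remove_cb` and `add_cb` are the Learner methods already used above:

```python
from contextlib import contextmanager

@contextmanager
def without_cb(learn, cb):
    "Run a block with callback `cb` removed from `learn`, restoring it afterwards"
    learn.remove_cb(cb)
    try:
        yield learn
    finally:
        learn.add_cb(cb)   # executed even if the body raises, as in the later no_bar revisions

# e.g. with without_cb(learn, learn.progress): learn.fit(1)   (which is what no_bar does)
```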
###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') 
def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. 
Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works 
without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. 
Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): 
defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def begin_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise.
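The axis bounds in `ShowGraphCallback.after_epoch` size the plot for the whole fit up front: remaining epochs times batches-per-epoch, plus the batches already recorded. A worked illustration with made-up numbers (5 epochs, 10 training batches per epoch, right after the second epoch), keeping the arithmetic but not the `Tensor` wrapping:

```python
n_epoch = 5
nb_batches = [10, 20]                      # cumulative train_iter after each finished epoch
losses = [float(i) for i in range(20)]     # one training loss per batch so far
val_losses = [3.0, 2.5]                    # one validation loss per epoch

x_bounds = (0, (n_epoch - len(nb_batches)) * nb_batches[0] + len(losses))
y_bounds = (0, max(max(losses), max(val_losses)))
print(x_bounds, y_bounds)                  # (0, 50) (0, 19.0) since (5-2)*10 + 20 = 50
```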
###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. 
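`ProgressCallback._write_stats` and `CSVLogger._write_line` rely on the same logger-chaining trick: stash `learn.logger`, install their own writer for the duration of the fit, and forward every record to the stashed logger. A sketch of that pattern in isolation, with a made-up `CaptureLogger` class and the newer `before_*` event names:

```python
from fastai.test_utils import synth_learner
from fastai.callback.core import Callback

class CaptureLogger(Callback):
    "Keep every logged row in memory while still calling the previous logger"
    def before_fit(self):
        self.rows = []
        self.old_logger, self.learn.logger = self.logger, self._capture
    def _capture(self, log):
        self.rows.append(list(log))   # e.g. [epoch, train_loss, valid_loss, time]
        self.old_logger(log)          # chain on, so printing / CSV writing still happens
    def after_fit(self):
        self.learn.logger = self.old_logger

learn = synth_learner(cbs=CaptureLogger())
learn.fit(2)
print(learn.capture_logger.rows)      # the same rows that were printed during training
```

fastai exposes the instance as `learn.capture_logger`, following the same naming rule that gives `learn.csv_logger` above.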
###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') 
def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. 
Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def begin_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def begin_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def begin_train(self): self._launch_pbar() def begin_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(begin_fit="Setup the master bar over the epochs", begin_epoch="Update the master bar", begin_train="Launch a progress bar over the training dataloader", begin_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) yield self if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, 
metrics=tst_metric) preds,targs = learn.validate() show_doc(ProgressCallback.begin_fit) show_doc(ProgressCallback.begin_epoch) show_doc(ProgressCallback.begin_train) show_doc(ProgressCallback.begin_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def begin_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(10) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): run_after=Recorder "Log the results displayed in `learn.path/fname`" def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def begin_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise. ###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.begin_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb. ###Markdown Progress and logging callbacks> Callback and helper function to track progress of training or log results ###Code from fastai2.test_utils import * ###Output _____no_output_____ ###Markdown ProgressCallback - ###Code # export @docs class ProgressCallback(Callback): "A `Callback` to handle the display of progress bars" run_after=Recorder def before_fit(self): assert hasattr(self.learn, 'recorder') if self.create_mbar: self.mbar = master_bar(list(range(self.n_epoch))) if self.learn.logger != noop: self.old_logger,self.learn.logger = self.logger,self._write_stats self._write_stats(self.recorder.metric_names) else: self.old_logger = noop def before_epoch(self): if getattr(self, 'mbar', False): self.mbar.update(self.epoch) def before_train(self): self._launch_pbar() def before_validate(self): self._launch_pbar() def after_train(self): self.pbar.on_iter_end() def after_validate(self): self.pbar.on_iter_end() def after_batch(self): self.pbar.update(self.iter+1) if hasattr(self, 'smooth_loss'): self.pbar.comment = f'{self.smooth_loss:.4f}' def _launch_pbar(self): self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False) self.pbar.update(0) def after_fit(self): if getattr(self, 'mbar', False): self.mbar.on_iter_end() delattr(self, 'mbar') if hasattr(self, 'old_logger'): self.learn.logger = self.old_logger def _write_stats(self, log): if getattr(self, 'mbar', False): self.mbar.write([f'{l:.6f}' if isinstance(l, float) else str(l) for l in log], table=True) _docs = dict(before_fit="Setup the master bar over the epochs", before_epoch="Update the master bar", before_train="Launch a progress bar over the training dataloader", before_validate="Launch a progress bar over the validation dataloader", after_train="Close the progress bar over the training dataloader", after_validate="Close the progress bar over the validation dataloader", after_batch="Update the current progress bar", after_fit="Close the master bar") if not hasattr(defaults, 'callbacks'): defaults.callbacks = [TrainEvalCallback, Recorder, ProgressCallback] elif 
ProgressCallback not in defaults.callbacks: defaults.callbacks.append(ProgressCallback) learn = synth_learner() learn.fit(5) #export @patch @contextmanager def no_bar(self:Learner): "Context manager that deactivates the use of progress bars" has_progress = hasattr(self, 'progress') if has_progress: self.remove_cb(self.progress) try: yield self finally: if has_progress: self.add_cb(ProgressCallback()) learn = synth_learner() with learn.no_bar(): learn.fit(5) #hide #Check validate works without any training def tst_metric(out, targ): return F.mse_loss(out, targ) learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.validate() #hide #Check get_preds works without any training learn = synth_learner(n_trn=5, metrics=tst_metric) preds,targs = learn.get_preds() show_doc(ProgressCallback.before_fit) show_doc(ProgressCallback.before_epoch) show_doc(ProgressCallback.before_train) show_doc(ProgressCallback.before_validate) show_doc(ProgressCallback.after_batch) show_doc(ProgressCallback.after_train) show_doc(ProgressCallback.after_validate) show_doc(ProgressCallback.after_fit) ###Output _____no_output_____ ###Markdown ShowGraphCallback - ###Code # export class ShowGraphCallback(Callback): "Update a graph of training and validation loss" run_after,run_valid=ProgressCallback,False def before_fit(self): self.run = not hasattr(self.learn, 'lr_finder') and not hasattr(self, "gather_preds") self.nb_batches = [] assert hasattr(self.learn, 'progress') def after_train(self): self.nb_batches.append(self.train_iter) def after_epoch(self): "Plot validation loss in the pbar graph" rec = self.learn.recorder iters = range_of(rec.losses) val_losses = [v[1] for v in rec.values] x_bounds = (0, (self.n_epoch - len(self.nb_batches)) * self.nb_batches[0] + len(rec.losses)) y_bounds = (0, max((max(Tensor(rec.losses)), max(Tensor(val_losses))))) self.progress.mbar.update_graph([(iters, rec.losses), (self.nb_batches, val_losses)], x_bounds, y_bounds) #slow learn = synth_learner(cbs=ShowGraphCallback()) learn.fit(5) ###Output _____no_output_____ ###Markdown CSVLogger - ###Code # export class CSVLogger(Callback): "Log the results displayed in `learn.path/fname`" run_after=Recorder def __init__(self, fname='history.csv', append=False): self.fname,self.append = Path(fname),append def read_log(self): "Convenience method to quickly access the log." return pd.read_csv(self.path/self.fname) def before_fit(self): "Prepare file with metric names." self.path.parent.mkdir(parents=True, exist_ok=True) self.file = (self.path/self.fname).open('a' if self.append else 'w') self.file.write(','.join(self.recorder.metric_names) + '\n') self.old_logger,self.learn.logger = self.logger,self._write_line def _write_line(self, log): "Write a line with `log` and call the old logger." self.file.write(','.join([str(t) for t in log]) + '\n') self.old_logger(log) def after_fit(self): "Close the file and clean up." self.file.close() self.learn.logger = self.old_logger ###Output _____no_output_____ ###Markdown The results are appended to an existing file if `append`, or they overwrite it otherwise.
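For the append=True case specifically, here is a short sketch added for illustration; it simply reuses the synthetic learner and the cleanup idiom from this notebook. Two consecutive fits keep writing into the same history.csv instead of overwriting it.
###Code
learn = synth_learner(cbs=CSVLogger(fname='history.csv', append=True))
learn.fit(2)
learn.fit(2)                      # rows from both runs accumulate in the same file
df = learn.csv_logger.read_log()  # read back everything that was logged
os.remove(learn.path/learn.csv_logger.fname)
###Output _____no_output_____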
###Code learn = synth_learner(cbs=CSVLogger()) learn.fit(5) show_doc(CSVLogger.read_log) df = learn.csv_logger.read_log() test_eq(df.columns.values, learn.recorder.metric_names) for i,v in enumerate(learn.recorder.values): test_close(df.iloc[i][:3], [i] + v) os.remove(learn.path/learn.csv_logger.fname) show_doc(CSVLogger.before_fit) show_doc(CSVLogger.after_fit) ###Output _____no_output_____ ###Markdown Export - ###Code #hide from nbdev.export import notebook2script notebook2script() ###Output Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Converted 09b_vision.utils.ipynb. Converted 09c_vision.widgets.ipynb. Converted 10_tutorial.pets.ipynb. Converted 11_vision.models.xresnet.ipynb. Converted 12_optimizer.ipynb. Converted 13_callback.core.ipynb. Converted 13a_learner.ipynb. Converted 13b_metrics.ipynb. Converted 14_callback.schedule.ipynb. Converted 14a_callback.data.ipynb. Converted 15_callback.hook.ipynb. Converted 15a_vision.models.unet.ipynb. Converted 16_callback.progress.ipynb. Converted 17_callback.tracker.ipynb. Converted 18_callback.fp16.ipynb. Converted 18a_callback.training.ipynb. Converted 19_callback.mixup.ipynb. Converted 20_interpret.ipynb. Converted 20a_distributed.ipynb. Converted 21_vision.learner.ipynb. Converted 22_tutorial.imagenette.ipynb. Converted 23_tutorial.vision.ipynb. Converted 24_tutorial.siamese.ipynb. Converted 24_vision.gan.ipynb. Converted 30_text.core.ipynb. Converted 31_text.data.ipynb. Converted 32_text.models.awdlstm.ipynb. Converted 33_text.models.core.ipynb. Converted 34_callback.rnn.ipynb. Converted 35_tutorial.wikitext.ipynb. Converted 36_text.models.qrnn.ipynb. Converted 37_text.learner.ipynb. Converted 38_tutorial.text.ipynb. Converted 39_tutorial.transformers.ipynb. Converted 40_tabular.core.ipynb. Converted 41_tabular.data.ipynb. Converted 42_tabular.model.ipynb. Converted 43_tabular.learner.ipynb. Converted 44_tutorial.tabular.ipynb. Converted 45_collab.ipynb. Converted 46_tutorial.collab.ipynb. Converted 50_tutorial.datablock.ipynb. Converted 60_medical.imaging.ipynb. Converted 61_tutorial.medical_imaging.ipynb. Converted 65_medical.text.ipynb. Converted 70_callback.wandb.ipynb. Converted 71_callback.tensorboard.ipynb. Converted 72_callback.neptune.ipynb. Converted 73_callback.captum.ipynb. Converted 74_callback.cutmix.ipynb. Converted 97_test_utils.ipynb. Converted 99_pytorch_doc.ipynb. Converted index.ipynb. Converted tutorial.ipynb.
week5/2_Functions_Part_2.ipynb
###Markdown Functions - Part 2 Introduction So far we have gotten to know the world of functions up close: functions are a useful tool that lets us split our code into well-defined sub-tasks and keep it organized and easy to maintain. A function has an "input", which is its parameters, and an "output", which is its return value. A function is called by writing its name, parentheses, and the list of arguments we want to pass to its parameters. In this notebook we will pick up additional tools that allow much more flexibility in defining functions and in using them. Advanced use of functions Passing arguments by name When we call a function, the arguments we pass are sent, in order, to the parameters defined in the function header. This situation is called positional arguments. Let's look at a function that receives a range (end and start, in that order) and returns a list of all the whole numbers in that range: ###Code def my_range(end, start): numbers = [] i = start while i < end: numbers.append(i) i += 1 return numbers my_range(5, 0) ###Output _____no_output_____ ###Markdown Sometimes we want to change the order of the arguments we send to a function. We do this at the call site, by writing the argument's name and then the value we want to pass to it: ###Code my_range(start=0, end=5) ###Output _____no_output_____ ###Markdown In this line we reversed the order of the arguments. Because the call used the parameter names that match the function header, the values were sent to the right place. This technique is called keyword arguments, and with it we pass our arguments according to the parameter names in the function header. We use this technique even when we do not want to change the order of the arguments, but only to tidy up the code a little. Consider, for example, the function random.randrange: it is nicer to see a call to it with the parameter names spelled out: ###Code import random random.randrange(100, 200) # less clear random.randrange(start=100, stop=200) # clearer ###Output _____no_output_____ ###Markdown Despite the use of the = sign, this is not an assignment in the classic sense. It is a special notation in function calls whose purpose is to say "pass the value such-and-such to the parameter named such-and-such". Parameters with default values Recall the dictionary method get, which lets us fetch a value by a given key. If the key we are looking for does not exist in the dictionary, the function returns None: ###Code ghibli_release_dates = { 'Castle in the Sky': '1986-08-02', 'My Neighbor Totoro': '1988-04-16', 'Spirited Away': '2001-07-20', 'Ponyo': '2008-07-19', } ponyo_release_date = ghibli_release_dates.get('Ponyo') men_in_black_release_date = ghibli_release_dates.get('Men in Black') print(f"Ponyo release date: {ponyo_release_date}") print(f"Men in Black release date: {men_in_black_release_date}") ###Output _____no_output_____ ###Markdown Let's implement the get function ourselves. For convenience, its usage will look slightly different: ###Code def get(dictionary, key): if key in dictionary: return dictionary[key] return None ponyo_release_date = get(ghibli_release_dates, 'Ponyo') men_in_black_release_date = get(ghibli_release_dates, 'Men in Black') print(f"Ponyo release date: {ponyo_release_date}") print(f"Men in Black release date: {men_in_black_release_date}") ###Output _____no_output_____ ###Markdown Our implementation is not perfect. The original operation, get on a dictionary, works in a more sophisticated way. It can be given an extra parameter that determines what is returned if the key passed in the first parameter is not found in the dictionary: ###Code ponyo_release_date = ghibli_release_dates.get('Ponyo', '???') men_in_black_release_date = ghibli_release_dates.get('Men in Black', '???') print(f"Ponyo release date: {ponyo_release_date}") print(f"Men in Black release date: {men_in_black_release_date}") ###Output _____no_output_____ ###Markdown Note the special behavior of get! If the key we passed in the first argument does not exist in the dictionary, it returns the value given in the second argument.
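A small illustration of why this second argument is handy, added here for clarity (it is not part of the original notebook): counting occurrences without first checking whether the key already exists.
###Code
# Hypothetical example: count how many times each title appears in a list.
# counts.get(title, 0) returns 0 the first time a title is seen.
titles = ['Ponyo', 'Spirited Away', 'Ponyo']
counts = {}
for title in titles:
    counts[title] = counts.get(title, 0) + 1
print(counts)  # {'Ponyo': 2, 'Spirited Away': 1}
###Output _____no_output_____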
You can pass it one argument, and you can pass it two arguments; it behaves correctly in both cases. This is not the first time we have seen functions like this. In fact, last week we learned about many builtins that behave this way: range, enumerate and round can all accept a varying number of arguments. Let's set get aside for now. Don't worry, we will come back to it soon. While we take a break from dictionary operations Valentine's Day is approaching, and the nearby rose shop wants to raise the price of all its products by one shekel. We have been asked to build a function for them that receives a list of prices and returns a list in which every item is greater by 1 than in the original price list. Let's get to work: ###Code def get_new_prices(l): l2 = [] for item in l: l2.append(item + 1) return l2 prices = [42, 73, 300] print(get_new_prices(prices)) ###Output _____no_output_____ ###Markdown Within a short time the function we built becomes a wild hit in rose shops. The head of the international rose cartel, Giuseppe Verdi, contacts us and asks us to refine the software so that he can raise product prices as he pleases. To meet the requirement, we will build a function that receives a list and, in addition, the amount to add to every item in that list. This way, if the caller passes the value 2 as the second argument, every item in the list will grow by 2. An easy implementation: ###Code def get_new_prices(l, increment_by): l2 = [] for item in l: l2.append(item + increment_by) return l2 prices = [42, 73, 300] print(get_new_prices(prices, 1)) print(get_new_prices(prices, 2)) ###Output _____no_output_____ ###Markdown Verdi bursts into song with joy, and asks for one last refinement to the function, if possible. If the caller passed only the list of prices, raise all prices by one shekel, as a default. If the second argument was passed, raise the prices by the value given in that argument. This time we deliberate a bit longer, scratch our heads, read a few Python guides and finally arrive at the following answer: ###Code def get_new_prices(l, increment_by=1): l2 = [] for item in l: l2.append(item + increment_by) return l2 prices = [42, 73, 300] print(prices) print(get_new_prices(prices)) print(get_new_prices(prices, 5)) ###Output _____no_output_____ ###Markdown When we want to define a parameter with a default value, we can set its default in the function header. If such an argument is passed to the function, Python uses the value that was passed. If not, the default value defined in the function header is used. In our case we defined the parameter increment_by with the default value 1. Calling the function with only one argument (the list of prices) increases all prices by 1, since that is the default value. Calling the function with two arguments (the list of prices, the amount of the increase) increases all prices by the amount that was passed. It is important to understand that calling the function with values in place of the defaults does not change the default value for subsequent calls: ###Code print(get_new_prices(prices, 5)) print(get_new_prices(prices)) ###Output _____no_output_____ ###Markdown Implement the full get function. The function receives a dictionary, a key, and an "emergency value". Return the value that belongs to the key that was received. Otherwise, return the emergency value that was passed to the function. If no emergency value was passed and the key is not in the dictionary, return None. Important! Solve it before you continue! (One possible solution is sketched a few cells further down.) We will demonstrate the same principle with several default values. If the requirement were, for example, to also add an option for a discount on the flower prices, we could implement it like this: ###Code def get_new_prices(l, increment_by=1, discount=0): l2 = [] for item in l: new_price = item + increment_by - discount l2.append(new_price) return l2 prices = [42, 73, 300] print(prices) print(get_new_prices(prices, 10, 1)) # an increase of 10, a discount of 1 print(get_new_prices(prices, 5)) # an increase of 5 ###Output _____no_output_____ ###Markdown But what happens when we want to give only a discount? In such a case, when we want to "skip" over one of the default values, we have to pass the parameter names in the function call.
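Before the discount example that follows, here is one possible solution to the "full get" exercise above. It is added for illustration only; the notebook leaves the exercise to the reader, and the parameter name emergency_value is my own choice rather than the notebook's.
###Code
def get(dictionary, key, emergency_value=None):
    # Return the value stored under key, or the emergency value if the key is missing.
    if key in dictionary:
        return dictionary[key]
    return emergency_value

print(get(ghibli_release_dates, 'Ponyo'))                # '2008-07-19'
print(get(ghibli_release_dates, 'Men in Black', '???'))  # '???'
print(get(ghibli_release_dates, 'Men in Black'))         # None
###Output _____no_output_____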
In the following example we raise the price by 1 (since that is the default) and lower it by 5: ###Code prices = [42, 73, 300] print(prices) print(get_new_prices(prices, discount=5)) ###Output _____no_output_____ ###Markdown It is admittedly a matter of style, but there is a certain beauty and order in naming the parameters even when you do not have to: ###Code print(get_new_prices(prices, increment_by=10, discount=1)) ###Output _____no_output_____ ###Markdown A variable number of arguments The Python function max, for example, behaves in a curious way. It can accept any number of arguments and decide which of them is the largest. See for yourselves! ###Code max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640) ###Output _____no_output_____ ###Markdown We too can implement a function that accepts a variable number of parameters quite easily. We will start by implementing a rather silly function that receives a variable number of parameters and prints them: ###Code def silly_function(*parameters): print(parameters) print(type(parameters)) print('-' * 20) silly_function('Shmulik', 'Shlomo') silly_function('Shmulik', 1, 1, 2, 3, 5, 8, 13) silly_function() ###Output _____no_output_____ ###Markdown What actually happened in that last example? When a parameter is defined in the function header with an asterisk, an unlimited number of arguments can be sent to that parameter. The value that ends up in the parameter is a tuple whose items are all the items that were passed as arguments. For demonstration purposes, let's build a function that receives parameters and prints them one after another: ###Code def silly_function2(*parameters): print(f"Printing all the items in {parameters}:") for parameter in parameters: print(parameter) print("-" * 20) silly_function2('Shmulik', 'Shlomo') silly_function2('Shmulik', 1, 1, 2, 3, 5, 8, 13) silly_function2() ###Output _____no_output_____ ###Markdown Play with silly_function2 and make sure you understand what happens in it. When you are done, try to implement the max function yourselves. Important! Solve it before you continue! Let's implement max: ###Code def my_max(*numbers): if not numbers: # if no arguments were given, there is no maximum return None maximum = numbers[0] for number in numbers: if number > maximum: maximum = number return maximum my_max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640) ###Output _____no_output_____ ###Markdown The function header can include additional parameters before the asterisk. For example, here is a function that receives a discount amount and the prices of all the products we bought, and returns the final sum we have to pay: ###Code def get_final_price(discount, *prices): return sum(prices) - discount get_final_price(10000, 3.141, 90053) ###Output _____no_output_____ ###Markdown Although at first glance get_final_price may look cool, it is worth being careful not to overuse this feature of Python. The example does demonstrate Python's remarkable flexibility, but as a rule it is a very bad example of using the asterisk. Notice how much easier it is to understand the following implementation of get_final_price, and how much easier it is to understand the call to this function: ###Code def get_final_price(prices, discount): return sum(prices) - discount get_final_price(prices=(3.141, 90053), discount=10000) ###Output _____no_output_____ ###Markdown Intermediate exercise: Paving the way Write a function named create_path that can receive an unlimited number of arguments. The first parameter will be the letter of the drive where the files are stored (usually "C"), and the parameters after it will be names of folders and files. Chain them together with the \ character to create a string representing a path on the computer. Put a colon after the drive letter. Assume that the input the user provided is valid.
Here are a few examples of calls to the function and their return values: The call create_path("C", "Users", "Yam") returns "C:\Users\Yam" The call create_path("C", "Users", "Yam", "HaimonLimon.mp4") returns "C:\Users\Yam\HaimonLimon.mp4" The call create_path("D", "1337.png") returns "D:\1337.png" The call create_path("C") returns "C:" The call create_path() raises an error ###Code def create_path(drive, *parts): path = drive + ":" for part in parts: path = path + "\\" + part return path print(create_path("C", "Users", "Yam", "HaimonLimon.mp4")) ###Output _____no_output_____ ###Markdown A variable number of named arguments At the beginning of the notebook we learned how to pass arguments to functions by name: ###Code def print_introduction(name, age): return f"My name is {name} and I am {age} years old." print_introduction(age=2019, name="Gandalf") ###Output _____no_output_____ ###Markdown But what if we want to pass our function an unlimited number of arguments by name? Take, as an example, the string method format. format is a flexible function when it comes to the number and the names of the arguments passed to it by name. Here are two examples of using it, a usage that at first glance may look magical: ###Code message = "My name is {name} and I am {age} years old" formatted_message = message.format(name="Gandalf", age=2019) print(formatted_message) song = "I'll {action} a story of a {animal}.\nA {animal} who's {key} is {value}." formatted_song = song.format(action="sing", animal="duck", key="name", value="Alfred Kwak") print(formatted_song) ###Output _____no_output_____ ###Markdown Let's write our own function that can accept an unlimited number of arguments by name. We will first use our old friend, silly_function, to see how the magic happens: ###Code def silly_function(**kwargs): print(kwargs) print(type(kwargs)) silly_function(a=5, b=6, address="221B Baker St, London, England.") ###Output _____no_output_____ ###Markdown This behavior happens because we used two asterisks before the variable name. Using two asterisks lets us pass an unlimited number of named arguments, in a way that is somewhat reminiscent of the single asterisk we saw earlier. The variable in which the data is stored is a dictionary, whose keys are the names of the arguments that were passed and whose values are the values passed for those names. Now that we understand how this works, let's try to create a more interesting function. The function we will write receives as arguments how many grams of each ingredient are needed to make sushi, and prints a recipe for us: ###Code def print_sushi_recipe(**ingredients_and_amounts): for ingredient, amount in ingredients_and_amounts.items(): print(f"{amount} grams of {ingredient}") print_sushi_recipe(rice=300, water=300, vinegar=15, sugar=10, salt=3, fish=600) ###Output _____no_output_____ ###Markdown In this example we used the fact that a parameter defined with two asterisks is necessarily a dictionary. We went over all of its keys and values with the items method, and printed the recipe, ingredient by ingredient. Make print_sushi_recipe print the ingredients in order of their weight, from lowest to highest. A parameter defined with two asterisks always appears at the end of the parameter list. Intermediate exercise: Roll your own format Write a function named my_format that receives a string and an unlimited number of named parameters. The function replaces every occurrence of {key} in the string, if key was passed as a parameter to the function. The value that {key} is replaced with is the value that was passed for key when the arguments were passed to the function. The function must not use the string method format or functions we have not learned yet. Here are a few examples of calls to the function and their return values: The call my_format("I'm Mr. {name}, look at me!", name="Meeseeks") returns "I'm Mr. Meeseeks, look at me!"
הקריאה my_format("{a} {b} {c} {c}", a="wubba", b="lubba", c="dub") תחזיר "wubba lubba dub dub" הקריאה my_format("The universe is basically an animal", animal="Chicken") תחזיר "The universe is basically an animal" הקריאה my_format("The universe is basically an animal") תחזיר "The universe is basically an animal" ###Code def my_format(s,**params): for k,v in params.items(): s=s.replace('{'+k+'}',v) return s my_format("The universe is basically an animal") ###Output _____no_output_____ ###Markdown חוק וסדר נוכל לשלב יחד את כל הטכניקות שלמדנו עד עכשיו לפונקציה אחת. ניצור, לדוגמה, פונקציה שמחשבת עלות הכנה של עוגה. הפונקציה תקבל: את רשימת הרכיבים הקיימים בסופר ואת המחירים שלהם. את רשימת הרכיבים הדרושים כדי להכין עוגה (נניח ששם כל רכיב הוא מילה בודדת). אם ללקוח מגיעה הנחה. שיעור ההנחה, באחוזים. כברירת מחדל, אם ללקוח מגיעה הנחה – שיעורה הוא 10%. לצורך פישוט התרגיל, נתעלם לרגע מעניין הכמויות במתכון :) ###Code def calculate_cake_price(apply_discount, *ingredients, discount_rate=10, **prices): if not apply_discount: discount_rate = 0 final_price = 0 for ingredient in ingredients: final_price = final_price + prices.get(ingredient) final_price = final_price - (final_price * discount_rate / 100) return final_price calculate_cake_price(True, 'chocolate', 'cream', chocolate=30, cream=20, water=5) ###Output _____no_output_____ ###Markdown הפונקציה נכתבה כדי להדגים את הטכניקה, והיא נראית די רע. ראו כמה קשה להבין איזה ארגומנט שייך לאיזה פרמטר בקריאה לפונקציה. יש להפעיל שיקול דעת לפני שימוש בטכניקות של קבלת פרמטרים מרובים. שימו לב לסדר הפרמטרים בכותרת הפונקציה: הארגומנטים שמיקומם קבוע ואנחנו יודעים מי הם הולכים להיות (apply_discount). הארגומנטים שמיקומם קבוע ואנחנו לא יודעים מי הם הולכים להיות (ingredients). הארגומנטים ששמותיהם ידועים וערך ברירת המחדל שלהם נקבע בראש הפונקציה (discount_rate). ערכים נוספים ששמותיהם לא ידועים מראש (prices). נסו לחשוב: למה נקבע דווקא הסדר הזה? איך הייתם כותבים את אותה הפונקציה בדיוק בלי שימוש בטכניקות שלמדנו? השימוש בערכי ברירת מחדל מותר. חשוב! פתרו לפני שתמשיכו! ערכי ברירת מחדל שאפשר לשנותם יש מקרה קצה של ערכי ברירת מחדל שגורם לפייתון להתנהג קצת מוזר. זה קורה כשערך ברירת המחדל שהוגדר בכותרת הפונקציה הוא mutable: ###Code def append(item, l=[]): l.append(item) return l print(append(4, [1, 2, 3])) print(append('a')) ###Output _____no_output_____ ###Markdown עד כאן נראה כאילו הפונקציה פועלת באופן שהיינו מצפים ממנה. ערך ברירת המחדל של הפרמטר l הוא רשימה ריקה, ולכן בקריאה השנייה חוזרת רשימה עם איבר בודד, ['a']. נקרא לפונקציה עוד כמה פעמים, ונגלה משהו מוזר: ###Code print(append('b')) print(append('c')) print(append('d')) print(append(4, [1, 2, 3])) print(append('e')) ###Output _____no_output_____ ###Markdown משונה ולא הגיוני! ציפינו לקבל את הרשימה ['b'] ואז את הרשימה ['c'] וכן הלאה. במקום זה בכל פעם מצטרף איבר חדש לרשימה. למה? הסיבה לכך היא שפייתון קוראת את כותרת הפונקציה רק פעם אחת – בשלב ההגדרה של הפונקציה. בשלב הזה שבו פייתון תקרא את כותרת הפונקציה, ערך ברירת המחדל של l יצביע לרשימה ריקה. מאותו רגע, בכל פעם שלא נעביר ל־l ערך, l תהיה אותה רשימת ברירת מחדל שהגדרנו בהתחלה. נדגים זאת בעזרת הדפסת ה־id של הרשימה: ###Code def view_memory_of_l(item, l=[]): l.append(item) print(f"{l} --> {id(l)}") return l same_list1 = view_memory_of_l('a') same_list2 = view_memory_of_l('b') same_list3 = view_memory_of_l('c') new_list1 = view_memory_of_l(4, [1, 2, 3]) new_list2 = view_memory_of_l(5, [1, 2, 3]) new_list3 = view_memory_of_l(6, [1, 2, 3]) ###Output _____no_output_____ ###Markdown כיצד נפתור את הבעיה? דבר ראשון – נשתדל שלא להגדיר משתנים מטיפוס שהוא mutable בתוך כותרת הפונקציה. 
If we still want the parameter to receive a list by default, we do it like this: ###Code def append(item, l=None): if l == None: l = [] l.append(item) return l print(append(4, [1, 2, 3])) print(append('a')) ###Output _____no_output_____ ###Markdown Note that the phenomenon does not reproduce with immutable structures, because, as their name suggests, they cannot be changed: ###Code def increment(i=0): i = i + 1 return i print(increment(100)) print(increment()) print(increment()) print(increment()) print(increment(100)) ###Output _____no_output_____ ###Markdown More examples An exact imitation of the dictionary get function: let's refresh our memory about unpacking: ###Code range_arguments = [1, 10, 3] range_result = range(*range_arguments) print(list(range_result)) ###Output _____no_output_____ ###Markdown Or: ###Code preformatted_message = "My name is {me}, and my sister is {sister}" parameters = {'me': 'Mei', 'sister': 'Satsuki'} message = preformatted_message.format(**parameters) print(message) ###Output _____no_output_____ ###Markdown If so, we can write: ###Code def get(dictionary, *args, **kwargs): return dictionary.get(*args, **kwargs) ###Output _____no_output_____ ###Markdown Note that the asterisks in the first line help us accept a variable number of arguments, while the asterisks in the second line are unpacking, as we learned last week. This imitation is not particularly useful to us right now, but it will work for any kind of operation. Glossary Default parameters: parameters that are given a default value in the function header. Positional arguments: values passed as arguments in a function call according to their position, with no name next to them. Keyword arguments: values passed as arguments in a function call according to their name, which appears before the equals sign. Variable number of arguments: usually called *args; lets us accept an unlimited number of positional arguments. Variable number of named arguments: usually called **kwargs; lets us accept an unlimited number of keyword arguments. Exercises Average it out Write a function named avg that receives an unlimited number of arguments and prints their average. The call avg(5, 6) returns 5.5 The call avg(10, 5, 3) returns 6 The call avg(2) returns 2 The call avg() returns None or an error, your choice ###Code def avg(*params): return sum(params)/len(params) avg(10, 5, 3) ###Output _____no_output_____ ###Markdown Cup of join Write a function named join that receives an unlimited number of lists, each list as its own parameter. The function must also be able to receive an extra parameter named sep. The function returns a single list made up of all the lists received as parameters. If the parameter sep was supplied, it should be inserted as an item between every two lists. If it was not supplied, the character "-" should be inserted instead. The call join([1, 2], [8], [9, 5, 6], sep='@') returns [1, 2, '@', 8, '@', 9, 5, 6] The call join([1, 2], [8], [9, 5, 6]) returns [1, 2, '-', 8, '-', 9, 5, 6] The call join([1]) returns [1] The call join() returns None or an error, your choice ###Code def join(*lists, sep="-"): if not lists: return None joined = list(lists[0]) for current_list in lists[1:]: joined = joined + [sep] + list(current_list) return joined join([1, 2], [8], [9, 5, 6], sep='@') ###Output _____no_output_____ ###Markdown A piece of cake Implement a function named get_recipe_price, which has: a parameter named prices, which receives a dictionary of the groceries needed to make a certain recipe. The dictionary key is the product name, and the dictionary value is its price per 100 grams. Assume each ingredient's name is a single word, with no spaces and no special characters. An optional parameter named optionals, which receives a list of ingredients to ignore, meaning we will not buy them at all. If the parameter is not given, all the ingredients that were passed should be taken into account. For every ingredient passed in ingredients, an argument bearing the ingredient's name should be passed. The value of the argument should be the amount of the ingredient (in grams) that we want to buy for the recipe. The function returns the price we have to pay for buying all the groceries.
The call get_recipe_price({'chocolate': 18, 'milk': 8}, chocolate=200, milk=100) returns 44 The call get_recipe_price({'chocolate': 18, 'milk': 8}, optionals=['milk'], chocolate=300) returns 54 The call get_recipe_price({}) returns 0 ###Code def get_recipe_price(prices, optionals=None, **ingredients): if optionals is None: optionals = [] total = 0 for ingredient, amount in ingredients.items(): if ingredient not in optionals: total = total + prices[ingredient] * amount / 100 return total get_recipe_price({}) ###Output _____no_output_____
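A few quick checks of the solution above against the examples given in the exercise text (added for illustration; they were not part of the original submission):
###Code
# Expected values come from the exercise description above.
print(get_recipe_price({'chocolate': 18, 'milk': 8}, chocolate=200, milk=100))            # 44.0
print(get_recipe_price({'chocolate': 18, 'milk': 8}, optionals=['milk'], chocolate=300))  # 54.0
print(get_recipe_price({}))                                                               # 0
###Output _____no_output_____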
אפשר להעביר לה פרמטר נוסף, שקובע מה יחזור אם המפתח שהעברנו בפרמטר הראשון לא נמצא במילון: ###Code ponyo_release_date = ghibli_release_dates.get('Ponyo', '???') men_in_black_release_date = ghibli_release_dates.get('Men in Black', '???') print(f"Ponyo release date: {ponyo_release_date}") print(f"Men in Black release date: {men_in_black_release_date}") ###Output _____no_output_____ ###Markdown שימו לב להתנהגות המיוחדת של הפעולה get! אם המפתח שהעברנו בארגומנט הראשון לא קיים במילון, היא מחזירה את הערך שכתוב בארגומנט השני. אפשר להעביר לה ארגומנט אחד, ואפשר להעביר לה שני ארגומנטים. היא מתפקדת כראוי בשני המצבים. זו לא פעם ראשונה שאנחנו רואים פונקציות כאלו. למעשה, בשבוע שעבר למדנו על פעולות builtins רבות שמתנהגות כך:range, enumerate ו־round, כולן יודעות לקבל מספר משתנה של ארגומנטים. נניח לפעולה get בינתיים. אל דאגה, נחזור אליה בקרוב. בזמן שאנחנו נחים מפעולות על מילונים יום האהבה מתקרב, וחנות הוורדים הקרובה מעוניינת להעלות את מחירי כל מוצריה בשקל אחד. התבקשנו לבנות עבורם פונקציה שמקבלת רשימת מחירים, ומחזירה רשימה שבה כל איבר גדול ב־1 מרשימת המחירים המקורית. ניגש לעבודה: ###Code def get_new_prices(l): l2 = [] for item in l: l2.append(item + 1) return l2 prices = [42, 73, 300] print(get_new_prices(prices)) ###Output _____no_output_____ ###Markdown בתוך זמן קצר הפונקציה שבנינו הופכת ללהיט היסטרי בחנויות הוורדים. מנהל קרטל הוורדים הבין־לאומי ג'וזפה ורדי יוצר איתנו קשר, ומבקש לשכלל התוכנה כך שיוכל להעלות את מחירי המוצרים כרצונו. כדי לעמוד בדרישה, נבנה פונקציה שמקבלת רשימה, ובנוסף אליה את המחיר שיתווסף לכל איבר ברשימה זו. כך, אם הקורא לפונקציה יעביר כארגומנט השני את הערך 2, כל איבר ברשימה יגדל ב־2. נממש בקלילות: ###Code def get_new_prices(l, increment_by): l2 = [] for item in l: l2.append(item + increment_by) return l2 prices = [42, 73, 300] print(get_new_prices(prices, 1)) print(get_new_prices(prices, 2)) ###Output _____no_output_____ ###Markdown ורדי פוצח בשירה מרוב אושר, ומבקש שכלול אחרון לפונקציה, אם אפשר. אם הקורא לפונקציה העביר לה רק את רשימת המחירים, העלו את כל המחירים בשקל, כברירת מחדל. אם כן הועבר הארגומנט השני, העלו את המחיר לפי הערך שצוין באותו ארגומנט. הפעם אנחנו מתחבטים קצת יותר, מגרדים בראש, קוראים כמה מדריכי פייתון ומגיעים לבסוף לתשובה הבאה: ###Code def get_new_prices(l, increment_by=1): l2 = [] for item in l: l2.append(item + increment_by) return l2 prices = [42, 73, 300] print(prices) print(get_new_prices(prices)) print(get_new_prices(prices, 5)) ###Output _____no_output_____ ###Markdown כשאנחנו רוצים להגדיר פרמטר עם ערך ברירת מחדל, נוכל לקבוע את ערך ברירת המחדל שלו בכותרת הפונקציה. אם יועבר ארגומנט שכזה לפונקציה – פייתון תשתמש בערך שהועבר. אם לא – יילקח ערך ברירת המחדל שהוגדר בכותרת הפונקציה. במקרה שלנו הגדרנו את הפרמטר increment_by עם ערך ברירת המחדל 1. קריאה לפונקציה עם ארגומנט אחד בלבד (רשימת המחירים) תגדיל את כל המחירים ב־1, שהרי הוא ערך ברירת המחדל. קריאה לפונקציה עם שני ארגומנטים (רשימת המחירים, סכום ההעלאה) תגדיל את כל המחירים בסכום ההעלאה שהועבר. חשוב להבין שקריאה לפונקציה עם ערכים במקום ערכי ברירת המחדל, לא תשנה את ערך ברירת המחדל בקריאות הבאות: ###Code print(get_new_prices(prices, 5)) print(get_new_prices(prices)) ###Output _____no_output_____ ###Markdown ממשו את פונקציית get המלאה. הפונקציה תקבל מילון, מפתח ו"ערך לשעת חירום". החזירו את הערך השייך למפתח שהתקבל. אחרת – החזירו את הערך לשעת החירום שהועבר לפונקציה. אם לא הועבר ערך לשעת חירום והמפתח לא נמצא במילון, החזירו None. חשוב! פתרו לפני שתמשיכו! נדגים את אותו עיקרון עם כמה ערכי ברירת מחדל. 
אם הדרישה הייתה, לדוגמה, להוסיף לפונקציה גם אפשרות להנחה במחירי הפרחים, היינו יכולים לממש זאת כך: ###Code def get_new_prices(l, increment_by=1, discount=0): l2 = [] for item in l: new_price = item + increment_by - discount l2.append(new_price) return l2 prices = [42, 73, 300] print(prices) print(get_new_prices(prices, 10, 1)) # העלאה של 10, הנחה של 1 print(get_new_prices(prices, 5)) # העלאה של 5 ###Output _____no_output_____ ###Markdown אך מה יקרה כשנרצה לתת רק הנחה? במקרה כזה, כשנרצה "לדלג" מעל אחד מערכי ברירת המחדל, נצטרך להעביר את שמות הפרמטרים בקריאה לפונקציה. בדוגמה הבאה אנחנו מעלים את המחיר ב־1 (כי זו ברירת המחדל), ומורידים אותו ב־5: ###Code prices = [42, 73, 300] print(prices) print(get_new_prices(prices, discount=5)) ###Output _____no_output_____ ###Markdown זה אמנם עניין של סגנון, אבל יש יופי וסדר בציון שמות הפרמטרים גם כשלא חייבים: ###Code print(get_new_prices(prices, increment_by=10, discount=1)) ###Output _____no_output_____ ###Markdown מספר משתנה של ארגומנטים הפונקציה הפייתונית max, למשל, מתנהגת באופן משונה. היא יודעת לקבל כל מספר שהוא של ארגומנטים, ולהחליט מי מהם הוא הגדול ביותר. ראו בעצמכם! ###Code max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640) ###Output _____no_output_____ ###Markdown נוכל גם אנחנו לממש פונקציה שמקבלת מספר משתנה של פרמטרים די בקלות. נתחיל מלממש פונקציה טיפשית למדי, שמקבלת מספר משתנה של פרמטרים ומדפיסה אותם: ###Code def silly_function(*parameters): print(parameters) print(type(parameters)) print('-' * 20) silly_function('Shmulik', 'Shlomo') silly_function('Shmulik', 1, 1, 2, 3, 5, 8, 13) silly_function() ###Output _____no_output_____ ###Markdown מה התרחש בדוגמה האחרונה, בעצם? כשפרמטר מוגדר בכותרת הפונקציה עם הסימן כוכבית, אפשר לשלוח אל אותו פרמטר מספר בלתי מוגבל של ארגומנטים. הערך שייכנס לפרמטר יהיה מסוג tuple, שאיבריו הם כל האיברים שהועברו כארגומנטים. לצורך ההדגמה, נבנה פונקציה שמקבלת פרמטרים ומדפיסה אותם בזה אחר זה: ###Code def silly_function2(*parameters): print(f"Printing all the items in {parameters}:") for parameter in parameters: print(parameter) print("-" * 20) silly_function2('Shmulik', 'Shlomo') silly_function2('Shmulik', 1, 1, 2, 3, 5, 8, 13) silly_function2() ###Output _____no_output_____ ###Markdown שחקו עם הפונקציה silly_function2 וודאו שהבנתם מה מתרחש בה. כשתסיימו, נסו לממש את הפונקציה max בעצמכם. חשוב! פתרו לפני שתמשיכו! נממש את max: ###Code def my_max(*numbers): if not numbers: # אם לא סופקו ארגומנטים, אין מקסימום return None maximum = numbers[0] for number in numbers: if number > maximum: maximum = number return maximum my_max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640) ###Output _____no_output_____ ###Markdown כותרת הפונקציה יכולה לכלול משתנים נוספים לפני הכוכבית. נראה לדוגמה פונקציה שמקבלת גובה הנחה ואת מחירי כל המוצרים שקנינו, ומחזירה את הסכום הסופי שעלינו לשלם: ###Code def get_final_price(discount, *prices): return sum(prices) - discount get_final_price(10000, 3.141, 90053) ###Output _____no_output_____ ###Markdown אף שבמבט ראשון הפונקציה get_final_price עשויה להיראות מגניבה, כדאי להיזהר משימוש מוגזם בתכונה הזו של פייתון. הדוגמה הזו אמנם מדגימה גמישות יוצאת דופן של פייתון, אבל ככלל היא דוגמה גרועה מאוד לשימוש בכוכבית. שימו לב כמה נוח יותר להבין את המימוש הבא ל־get_final_price, וכמה נוח יותר להבין את הקריאה לפונקציה הזו: ###Code def get_final_price(prices, discount): return sum(prices) - discount get_final_price(prices=(3.141, 90053), discount=10000) ###Output _____no_output_____ ###Markdown תרגול ביניים: סולל דרך כתבו פונקציה בשם create_path שיכולה לקבל מספר בלתי מוגבל של ארגומנטים. 
הפרמטר הראשון יהיה אות הכונן שבו הקבצים מאוחסנים (לרוב "C"), והפרמטרים שאחריו יהיו שמות של תיקיות וקבצים. שרשרו אותם בעזרת התו \ כדי ליצור מהם מחרוזת המייצגת נתיב במחשב. אחרי האות של הכונן שימו נקודתיים. הניחו שהקלט שהמשתמש הכניס הוא תקין. הנה כמה דוגמאות לקריאות לפונקציה ולערכי ההחזרה שלה: הקריאה create_path("C", "Users", "Yam") תחזיר "C:\Users\Yam"הקריאה create_path("C", "Users", "Yam", "HaimonLimon.mp4") תחזיר "C:\Users\Yam\HaimonLimon.mp4"הקריאה create_path("D", "1337.png") תחזיר "D:\1337.png"הקריאה create_path("C") תחזיר "C:"הקריאה create_path() תגרום לשגיאה מספר משתנה של ארגומנטים עם שמות בתחילת המחברת למדנו כיצד מעבירים לפונקציות ארגומנטים בעזרת שם: ###Code def print_introduction(name, age): return f"My name is {name} and I am {age} years old." print_introduction(age=2019, name="Gandalf") ###Output _____no_output_____ ###Markdown אבל מה אם נרצה להעביר לפונקציה שלנו מספר בלתי מוגבל של ארגומנטים לפי שם? נביא כדוגמה את הפעולה format על מחרוזות.format היא פונקציה גמישה בכל הנוגע לכמות ולשמות של הארגומנטים שמועברים לה לפי שם. נראה שתי דוגמאות לשימוש בה, שימוש שבמבט ראשון עשוי להיראות קסום: ###Code message = "My name is {name} and I am {age} years old" formatted_message = message.format(name="Gandalf", age=2019) print(formatted_message) song = "I'll {action} a story of a {animal}.\nA {animal} who's {key} is {value}." formatted_song = song.format(action="sing", animal="duck", key="name", value="Alfred Kwak") print(formatted_song) ###Output _____no_output_____ ###Markdown נכתוב גם אנחנו פונקציה שמסוגלת לקבל מספר בלתי מוגבל של ארגומנטים לפי שם. ניעזר תחילה בידידתנו הוותיקה, silly_function, כדי לראות איך הקסם קורה: ###Code def silly_function(**kwargs): print(kwargs) print(type(kwargs)) silly_function(a=5, b=6, address="221B Baker St, London, England.") ###Output _____no_output_____ ###Markdown ההתנהגות הזו מתרחשת כיוון שהשתמשנו בשתי כוכביות לפני שם המשתנה. השימוש בשתי כוכביות מאפשר לנו להעביר מספר בלתי מוגבל של ארגומנטים עם שם, באופן שמזכיר קצת את השימוש בכוכבית שראינו קודם. המשתנה שבו נשמרים הנתונים הוא מסוג מילון, ובו המפתחות יהיו שמות הארגומנטים שהועברו, והערכים – הערכים שהועברו לאותם שמות. אחרי שהבנו איך הסיפור הזה עובד, בואו ננסה ליצור פונקציה מעניינת יותר. הפונקציה שנכתוב תקבל כארגומנטים כמה גרם מכל רכיב צריך כדי להכין סושי, ותדפיס לנו מתכון: ###Code def print_sushi_recipe(**ingredients_and_amounts): for ingredient, amount in ingredients_and_amounts.items(): print(f"{amount} grams of {ingredient}") print_sushi_recipe(rice=300, water=300, vinegar=15, sugar=10, salt=3, fish=600) ###Output _____no_output_____ ###Markdown בדוגמה זו השתמשנו בעובדה שהפרמטר שמוגדר בעזרת שתי כוכביות הוא בהכרח מילון. עברנו על כל המפתחות והערכים שבו בעזרת הפעולה items, והדפסנו את המתכון, רכיב אחר רכיב. גרמו לפונקציה print_sushi_recipe להדפיס את הרכיבים לפי סדר משקלם, מהנמוך לגבוה. פרמטר המוגדר בעזרת שתי כוכביות תמיד יופיע בסוף רשימת הפרמטרים. תרגול ביניים: גזור פזורפ כתבו פונקציה בשם my_format שמקבלת מחרוזת, ומספר בלתי מוגבל של פרמטרים עם שמות. הפונקציה תחליף כל הופעה של {key} במחרוזת, אם key הועבר כפרמטר לפונקציה. הערך שבו {key} יוחלף הוא הערך שהועבר ל־key במסגרת העברת הארגומנטים לפונקציה. הפונקציה לא תשתמש בפעולה format של מחרוזות או בפונקציות שלא למדנו עד כה. הנה כמה דוגמאות לקריאות לפונקציה ולערכי ההחזרה שלה: הקריאה my_format("I'm Mr. {name}, look at me!", name="Meeseeks") תחזיר "I'm Mr. Meeseeks, look at me!" 
הקריאה my_format("{a} {b} {c} {c}", a="wubba", b="lubba", c="dub") תחזיר "wubba lubba dub dub" הקריאה my_format("The universe is basically an animal", animal="Chicken") תחזיר "The universe is basically an animal" הקריאה my_format("The universe is basically an animal") תחזיר "The universe is basically an animal" חוק וסדר נוכל לשלב יחד את כל הטכניקות שלמדנו עד עכשיו לפונקציה אחת. ניצור, לדוגמה, פונקציה שמחשבת עלות הכנה של עוגה. הפונקציה תקבל:את רשימת הרכיבים הקיימים בסופר ואת המחירים שלהם.את רשימת הרכיבים הדרושים כדי להכין עוגה (נניח ששם כל רכיב הוא מילה בודדת).אם ללקוח מגיעה הנחה.שיעור ההנחה, באחוזים. כברירת מחדל, אם ללקוח מגיעה הנחה – שיעורה הוא 10%. לצורך פישוט התרגיל, נתעלם לרגע מעניין הכמויות במתכון :) ###Code def calculate_cake_price(apply_discount, *ingredients, discount_rate=10, **prices): if not apply_discount: discount_rate = 0 final_price = 0 for ingredient in ingredients: final_price = final_price + prices.get(ingredient) final_price = final_price - (final_price * discount_rate / 100) return final_price calculate_cake_price(True, 'chocolate', 'cream', chocolate=30, cream=20, water=5) ###Output _____no_output_____ ###Markdown הפונקציה נכתבה כדי להדגים את הטכניקה, והיא נראית די רע. ראו כמה קשה להבין איזה ארגומנט שייך לאיזה פרמטר בקריאה לפונקציה. יש להפעיל שיקול דעת לפני שימוש בטכניקות של קבלת פרמטרים מרובים. שימו לב לסדר הפרמטרים בכותרת הפונקציה:הארגומנטים שמיקומם קבוע ואנחנו יודעים מי הם הולכים להיות (apply_discount).הארגומנטים שמיקומם קבוע ואנחנו לא יודעים מי הם הולכים להיות (ingredients).הארגומנטים ששמותיהם ידועים וערך ברירת המחדל שלהם נקבע בראש הפונקציה (discount_rate).ערכים נוספים ששמותיהם לא ידועים מראש (prices). נסו לחשוב: למה נקבע דווקא הסדר הזה? איך הייתם כותבים את אותה הפונקציה בדיוק בלי שימוש בטכניקות שלמדנו? השימוש בערכי ברירת מחדל מותר. חשוב! פתרו לפני שתמשיכו! ערכי ברירת מחדל שאפשר לשנותם יש מקרה קצה של ערכי ברירת מחדל שגורם לפייתון להתנהג קצת מוזר. זה קורה כשערך ברירת המחדל שהוגדר בכותרת הפונקציה הוא mutable: ###Code def append(item, l=[]): l.append(item) return l print(append(4, [1, 2, 3])) print(append('a')) ###Output _____no_output_____ ###Markdown עד כאן נראה כאילו הפונקציה פועלת באופן שהיינו מצפים ממנה. ערך ברירת המחדל של הפרמטר l הוא רשימה ריקה, ולכן בקריאה השנייה חוזרת רשימה עם איבר בודד, ['a']. נקרא לפונקציה עוד כמה פעמים, ונגלה משהו מוזר: ###Code print(append('b')) print(append('c')) print(append('d')) print(append(4, [1, 2, 3])) print(append('e')) ###Output _____no_output_____ ###Markdown משונה ולא הגיוני! ציפינו לקבל את הרשימה ['b'] ואז את הרשימה ['c'] וכן הלאה. במקום זה בכל פעם מצטרף איבר חדש לרשימה. למה? הסיבה לכך היא שפייתון קוראת את כותרת הפונקציה רק פעם אחת – בשלב ההגדרה של הפונקציה. בשלב הזה שבו פייתון תקרא את כותרת הפונקציה, ערך ברירת המחדל של l יצביע לרשימה ריקה. מאותו רגע, בכל פעם שלא נעביר ל־l ערך, l תהיה אותה רשימת ברירת מחדל שהגדרנו בהתחלה. נדגים זאת בעזרת הדפסת ה־id של הרשימה: ###Code def view_memory_of_l(item, l=[]): l.append(item) print(f"{l} --> {id(l)}") return l same_list1 = view_memory_of_l('a') same_list2 = view_memory_of_l('b') same_list3 = view_memory_of_l('c') new_list1 = view_memory_of_l(4, [1, 2, 3]) new_list2 = view_memory_of_l(5, [1, 2, 3]) new_list3 = view_memory_of_l(6, [1, 2, 3]) ###Output _____no_output_____ ###Markdown כיצד נפתור את הבעיה? דבר ראשון – נשתדל שלא להגדיר משתנים מטיפוס שהוא mutable בתוך כותרת הפונקציה. 
אם נרצה בכל זאת שהפרמטר יקבל רשימה כברירת מחדל, נעשה זאת כך: ###Code def append(item, l=None): if l == None: l = [] l.append(item) return l print(append(4, [1, 2, 3])) print(append('a')) ###Output _____no_output_____ ###Markdown שימו לב שהתופעה לא משתחזרת במבנים שהם immutable, כיוון שכשמם כן הם – אי אפשר לשנותם: ###Code def increment(i=0): i = i + 1 return i print(increment(100)) print(increment()) print(increment()) print(increment()) print(increment(100)) ###Output _____no_output_____ ###Markdown דוגמאות נוספות חיקוי מדויק של פונקציית get למילונים: נרענן את זיכרוננו בנוגע ל־unpacking: ###Code range_arguments = [1, 10, 3] range_result = range(*range_arguments) print(list(range_result)) ###Output _____no_output_____ ###Markdown או: ###Code preformatted_message = "My name is {me}, and my sister is {sister}" parameters = {'me': 'Mei', 'sister': 'Satsuki'} message = preformatted_message.format(**parameters) print(message) ###Output _____no_output_____ ###Markdown אם כך, נוכל לכתוב: ###Code def get(dictionary, *args, **kwargs): return dictionary.get(*args, **kwargs) ###Output _____no_output_____ ###Markdown פונקציות – חלק 2 הקדמה עד כה למדנו להכיר את עולמן של הפונקציות מקרוב: פונקציות הן כלי שימושי שמאפשר לנו לחלק את הקוד לתתי־משימות מוגדרות, ולשמור עליו מסודר וקל לתחזוק. לפונקציה יש "קלט" שהוא הפרמטרים שלה, ו"פלט" שהוא ערך ההחזרה שלה. אפשר לקרוא לפונקציה בציון שמה, סוגריים, ורשימת הארגומנטים שרוצים להעביר לפרמטרים שלה. במחברת זו נרכוש כלים נוספים שיאפשרו לנו גמישות רבה יותר בהגדרת פונקציות ובשימוש בהן. שימוש מתקדם בפונקציות העברת ארגומנטים בעזרת שם כאשר אנחנו קוראים לפונקציה, יישלחו לפי הסדר הארגומנטים שנעביר אל הפרמטרים שמוגדרים בכותרת הפונקציה. מצב כזה נקרא positional arguments ("ארגומנטים לפי מיקום"). נסתכל על פונקציה שמקבלת טווח (סוף והתחלה, בסדר הזה) ומחזירה רשימה של כל המספרים השלמים בטווח: ###Code def my_range(end, start): numbers = [] i = start while i < end: numbers.append(i) i += 1 return numbers my_range(5, 0) ###Output _____no_output_____ ###Markdown לפעמים נרצה לשנות את סדר הארגומנטים שאנחנו שולחים לפונקציה. נעשה זאת בקריאה לפונקציה, על־ידי העברת שם הארגומנט ולאחר מכן העברת הערך שאנחנו רוצים להעביר אליו: ###Code my_range(start=0, end=5) ###Output _____no_output_____ ###Markdown בשורה הזו הפכנו את סדר הארגומנטים. כיוון שבקריאה כתבנו את שמות הפרמטרים התואמים לכותרת הפונקציה, הערכים נשלחו למקום הנכון. השיטה הזו נקראת keyword arguments ("ארגומנטים לפי שם"), ובה אנחנו מעבירים את הארגומנטים שלנו לפי שמות הפרמטרים בכותרת הפונקציה. אנחנו משתמשים בשיטה הזו אפילו כשאנחנו לא רוצים לשנות את סדר הארגומנטים, אלא רק לעשות קצת סדר בקוד. נבחן, לדוגמה, את המקרה של הפונקציה random.randrange – נעים יותר לראות קריאה לפונקציה עם שמות הפרמטרים: ###Code import random random.randrange(100, 200) # מובן פחות random.randrange(start=100, stop=200) # מובן יותר ###Output _____no_output_____ ###Markdown למרות השימוש בסימן =, לא מדובר פה בהשמה במובן הקלאסי שלה. זוהי צורת כתיבה מיוחדת בקריאה לפונקציות שהמטרה שלה היא לסמן "העבר לפרמטר ששמו כך־וכך את הערך כך־וכך". פרמטרים עם ערכי ברירת מחדל נזכר בפונקציה get של מילון, שמאפשרת לקבל ממנו ערך לפי מפתח מסוים. 
אם המפתח שאנחנו מחפשים לא קיים במילון, הפונקציה מחזירה None: ###Code ghibli_release_dates = { 'Castle in the Sky': '1986-08-02', 'My Neighbor Totoro': '1988-04-16', 'Spirited Away': '2001-07-20', 'Ponyo': '2008-07-19', } ponyo_release_date = ghibli_release_dates.get('Ponyo') men_in_black_release_date = ghibli_release_dates.get('Men in Black') print(f"Ponyo release date: {ponyo_release_date}") print(f"Men in Black release date: {men_in_black_release_date}") ###Output _____no_output_____ ###Markdown נממש את הפונקציה get בעצמנו. לשם הנוחות, ייראה השימוש שונה במקצת: ###Code def get(dictionary, key): if key in dictionary: return dictionary[key] return None ponyo_release_date = get(ghibli_release_dates, 'Ponyo') men_in_black_release_date = get(ghibli_release_dates, 'Men in Black') print(f"Ponyo release date: {ponyo_release_date}") print(f"Men in Black release date: {men_in_black_release_date}") ###Output _____no_output_____ ###Markdown המימוש שלנו לא מושלם. הפעולה המקורית, get על מילון, פועלת בצורה מתוחכמת יותר. אפשר להעביר לה פרמטר נוסף, שקובע מה יחזור אם המפתח שהעברנו בפרמטר הראשון לא נמצא במילון: ###Code ponyo_release_date = ghibli_release_dates.get('Ponyo', '???') men_in_black_release_date = ghibli_release_dates.get('Men in Black', '???') print(f"Ponyo release date: {ponyo_release_date}") print(f"Men in Black release date: {men_in_black_release_date}") ###Output _____no_output_____ ###Markdown שימו לב להתנהגות המיוחדת של הפעולה get! אם המפתח שהעברנו בארגומנט הראשון לא קיים במילון, היא מחזירה את הערך שכתוב בארגומנט השני. אפשר להעביר לה ארגומנט אחד, ואפשר להעביר לה שני ארגומנטים. היא מתפקדת כראוי בשני המצבים. זו לא פעם ראשונה שאנחנו רואים פונקציות כאלו. למעשה, בשבוע שעבר למדנו על פעולות builtins רבות שמתנהגות כך: range, enumerate ו־round, כולן יודעות לקבל מספר משתנה של ארגומנטים. נניח לפעולה get בינתיים. אל דאגה, נחזור אליה בקרוב. בזמן שאנחנו נחים מפעולות על מילונים יום האהבה מתקרב, וחנות הוורדים הקרובה מעוניינת להעלות את מחירי כל מוצריה בשקל אחד. התבקשנו לבנות עבורם פונקציה שמקבלת רשימת מחירים, ומחזירה רשימה שבה כל איבר גדול ב־1 מרשימת המחירים המקורית. ניגש לעבודה: ###Code def get_new_prices(l): l2 = [] for item in l: l2.append(item + 1) return l2 prices = [42, 73, 300] print(get_new_prices(prices)) ###Output _____no_output_____ ###Markdown בתוך זמן קצר הפונקציה שבנינו הופכת ללהיט היסטרי בחנויות הוורדים. מנהל קרטל הוורדים הבין־לאומי ג'וזפה ורדי יוצר איתנו קשר, ומבקש לשכלל התוכנה כך שיוכל להעלות את מחירי המוצרים כרצונו. כדי לעמוד בדרישה, נבנה פונקציה שמקבלת רשימה, ובנוסף אליה את המחיר שיתווסף לכל איבר ברשימה זו. כך, אם הקורא לפונקציה יעביר כארגומנט השני את הערך 2, כל איבר ברשימה יגדל ב־2. נממש בקלילות: ###Code def get_new_prices(l, increment_by): l2 = [] for item in l: l2.append(item + increment_by) return l2 prices = [42, 73, 300] print(get_new_prices(prices, 1)) print(get_new_prices(prices, 2)) ###Output _____no_output_____ ###Markdown ורדי פוצח בשירה מרוב אושר, ומבקש שכלול אחרון לפונקציה, אם אפשר. אם הקורא לפונקציה העביר לה רק את רשימת המחירים, העלו את כל המחירים בשקל, כברירת מחדל. אם כן הועבר הארגומנט השני, העלו את המחיר לפי הערך שצוין באותו ארגומנט. הפעם אנחנו מתחבטים קצת יותר, מגרדים בראש, קוראים כמה מדריכי פייתון ומגיעים לבסוף לתשובה הבאה: ###Code def get_new_prices(l, increment_by=1): l2 = [] for item in l: l2.append(item + increment_by) return l2 prices = [42, 73, 300] print(prices) print(get_new_prices(prices)) print(get_new_prices(prices, 5)) ###Output _____no_output_____ ###Markdown כשאנחנו רוצים להגדיר פרמטר עם ערך ברירת מחדל, נוכל לקבוע את ערך ברירת המחדל שלו בכותרת הפונקציה. 
אם יועבר ארגומנט שכזה לפונקציה – פייתון תשתמש בערך שהועבר. אם לא – יילקח ערך ברירת המחדל שהוגדר בכותרת הפונקציה. במקרה שלנו הגדרנו את הפרמטר increment_by עם ערך ברירת המחדל 1. קריאה לפונקציה עם ארגומנט אחד בלבד (רשימת המחירים) תגדיל את כל המחירים ב־1, שהרי הוא ערך ברירת המחדל. קריאה לפונקציה עם שני ארגומנטים (רשימת המחירים, סכום ההעלאה) תגדיל את כל המחירים בסכום ההעלאה שהועבר. חשוב להבין שקריאה לפונקציה עם ערכים במקום ערכי ברירת המחדל, לא תשנה את ערך ברירת המחדל בקריאות הבאות: ###Code print(get_new_prices(prices, 5)) print(get_new_prices(prices)) ###Output _____no_output_____ ###Markdown ממשו את פונקציית get המלאה. הפונקציה תקבל מילון, מפתח ו"ערך לשעת חירום". החזירו את הערך השייך למפתח שהתקבל. אחרת – החזירו את הערך לשעת החירום שהועבר לפונקציה. אם לא הועבר ערך לשעת חירום והמפתח לא נמצא במילון, החזירו None. חשוב! פתרו לפני שתמשיכו! נדגים את אותו עיקרון עם כמה ערכי ברירת מחדל. אם הדרישה הייתה, לדוגמה, להוסיף לפונקציה גם אפשרות להנחה במחירי הפרחים, היינו יכולים לממש זאת כך: ###Code def get_new_prices(l, increment_by=1, discount=0): l2 = [] for item in l: new_price = item + increment_by - discount l2.append(new_price) return l2 prices = [42, 73, 300] print(prices) print(get_new_prices(prices, 10, 1)) # העלאה של 10, הנחה של 1 print(get_new_prices(prices, 5)) # העלאה של 5 ###Output _____no_output_____ ###Markdown אך מה יקרה כשנרצה לתת רק הנחה? במקרה כזה, כשנרצה "לדלג" מעל אחד מערכי ברירת המחדל, נצטרך להעביר את שמות הפרמטרים בקריאה לפונקציה. בדוגמה הבאה אנחנו מעלים את המחיר ב־1 (כי זו ברירת המחדל), ומורידים אותו ב־5: ###Code prices = [42, 73, 300] print(prices) print(get_new_prices(prices, discount=5)) ###Output _____no_output_____ ###Markdown זה אמנם עניין של סגנון, אבל יש יופי וסדר בציון שמות הפרמטרים גם כשלא חייבים: ###Code print(get_new_prices(prices, increment_by=10, discount=1)) ###Output _____no_output_____ ###Markdown מספר משתנה של ארגומנטים הפונקציה הפייתונית max, למשל, מתנהגת באופן משונה. היא יודעת לקבל כל מספר שהוא של ארגומנטים, ולהחליט מי מהם הוא הגדול ביותר. ראו בעצמכם! ###Code max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640) ###Output _____no_output_____ ###Markdown נוכל גם אנחנו לממש פונקציה שמקבלת מספר משתנה של פרמטרים די בקלות. נתחיל מלממש פונקציה טיפשית למדי, שמקבלת מספר משתנה של פרמטרים ומדפיסה אותם: ###Code def silly_function(*parameters): print(parameters) print(type(parameters)) print('-' * 20) silly_function('Shmulik', 'Shlomo') silly_function('Shmulik', 1, 1, 2, 3, 5, 8, 13) silly_function() ###Output _____no_output_____ ###Markdown מה התרחש בדוגמה האחרונה, בעצם? כשפרמטר מוגדר בכותרת הפונקציה עם הסימן כוכבית, אפשר לשלוח אל אותו פרמטר מספר בלתי מוגבל של ארגומנטים. הערך שייכנס לפרמטר יהיה מסוג tuple, שאיבריו הם כל האיברים שהועברו כארגומנטים. לצורך ההדגמה, נבנה פונקציה שמקבלת פרמטרים ומדפיסה אותם בזה אחר זה: ###Code def silly_function2(*parameters): print(f"Printing all the items in {parameters}:") for parameter in parameters: print(parameter) print("-" * 20) silly_function2('Shmulik', 'Shlomo') silly_function2('Shmulik', 1, 1, 2, 3, 5, 8, 13) silly_function2() ###Output _____no_output_____ ###Markdown שחקו עם הפונקציה silly_function2 וודאו שהבנתם מה מתרחש בה. כשתסיימו, נסו לממש את הפונקציה max בעצמכם. חשוב! פתרו לפני שתמשיכו! 
נממש את max: ###Code def my_max(*numbers): if not numbers: # אם לא סופקו ארגומנטים, אין מקסימום return None maximum = numbers[0] for number in numbers: if number > maximum: maximum = number return maximum my_max(13, 256, 278, 887, 989, 457, 6510, 18, 865, 901, 401, 704, 640) ###Output _____no_output_____ ###Markdown כותרת הפונקציה יכולה לכלול משתנים נוספים לפני הכוכבית. נראה לדוגמה פונקציה שמקבלת גובה הנחה ואת מחירי כל המוצרים שקנינו, ומחזירה את הסכום הסופי שעלינו לשלם: ###Code def get_final_price(discount, *prices): return sum(prices) - discount get_final_price(10000, 3.141, 90053) ###Output _____no_output_____ ###Markdown אף שבמבט ראשון הפונקציה get_final_price עשויה להיראות מגניבה, כדאי להיזהר משימוש מוגזם בתכונה הזו של פייתון. הדוגמה הזו אמנם מדגימה גמישות יוצאת דופן של פייתון, אבל ככלל היא דוגמה גרועה מאוד לשימוש בכוכבית. שימו לב כמה נוח יותר להבין את המימוש הבא ל־get_final_price, וכמה נוח יותר להבין את הקריאה לפונקציה הזו: ###Code def get_final_price(prices, discount): return sum(prices) - discount get_final_price(prices=(3.141, 90053), discount=10000) ###Output _____no_output_____ ###Markdown תרגול ביניים: סולל דרך כתבו פונקציה בשם create_path שיכולה לקבל מספר בלתי מוגבל של ארגומנטים. הפרמטר הראשון יהיה אות הכונן שבו הקבצים מאוחסנים (לרוב "C"), והפרמטרים שאחריו יהיו שמות של תיקיות וקבצים. שרשרו אותם בעזרת התו \ כדי ליצור מהם מחרוזת המייצגת נתיב במחשב. אחרי האות של הכונן שימו נקודתיים. הניחו שהקלט שהמשתמש הכניס הוא תקין. הנה כמה דוגמאות לקריאות לפונקציה ולערכי ההחזרה שלה: הקריאה create_path("C", "Users", "Yam") תחזיר "C:\Users\Yam" הקריאה create_path("C", "Users", "Yam", "HaimonLimon.mp4") תחזיר "C:\Users\Yam\HaimonLimon.mp4" הקריאה create_path("D", "1337.png") תחזיר "D:\1337.png" הקריאה create_path("C") תחזיר "C:" הקריאה create_path() תגרום לשגיאה מספר משתנה של ארגומנטים עם שמות בתחילת המחברת למדנו כיצד מעבירים לפונקציות ארגומנטים בעזרת שם: ###Code def print_introduction(name, age): return f"My name is {name} and I am {age} years old." print_introduction(age=2019, name="Gandalf") ###Output _____no_output_____ ###Markdown אבל מה אם נרצה להעביר לפונקציה שלנו מספר בלתי מוגבל של ארגומנטים לפי שם? נביא כדוגמה את הפעולה format על מחרוזות. format היא פונקציה גמישה בכל הנוגע לכמות ולשמות של הארגומנטים שמועברים לה לפי שם. נראה שתי דוגמאות לשימוש בה, שימוש שבמבט ראשון עשוי להיראות קסום: ###Code message = "My name is {name} and I am {age} years old" formatted_message = message.format(name="Gandalf", age=2019) print(formatted_message) song = "I'll {action} a story of a {animal}.\nA {animal} who's {key} is {value}." formatted_song = song.format(action="sing", animal="duck", key="name", value="Alfred Kwak") print(formatted_song) ###Output _____no_output_____ ###Markdown נכתוב גם אנחנו פונקציה שמסוגלת לקבל מספר בלתי מוגבל של ארגומנטים לפי שם. ניעזר תחילה בידידתנו הוותיקה, silly_function, כדי לראות איך הקסם קורה: ###Code def silly_function(**kwargs): print(kwargs) print(type(kwargs)) silly_function(a=5, b=6, address="221B Baker St, London, England.") ###Output _____no_output_____ ###Markdown ההתנהגות הזו מתרחשת כיוון שהשתמשנו בשתי כוכביות לפני שם המשתנה. השימוש בשתי כוכביות מאפשר לנו להעביר מספר בלתי מוגבל של ארגומנטים עם שם, באופן שמזכיר קצת את השימוש בכוכבית שראינו קודם. המשתנה שבו נשמרים הנתונים הוא מסוג מילון, ובו המפתחות יהיו שמות הארגומנטים שהועברו, והערכים – הערכים שהועברו לאותם שמות. אחרי שהבנו איך הסיפור הזה עובד, בואו ננסה ליצור פונקציה מעניינת יותר. 
הפונקציה שנכתוב תקבל כארגומנטים כמה גרם מכל רכיב צריך כדי להכין סושי, ותדפיס לנו מתכון: ###Code def print_sushi_recipe(**ingredients_and_amounts): for ingredient, amount in ingredients_and_amounts.items(): print(f"{amount} grams of {ingredient}") print_sushi_recipe(rice=300, water=300, vinegar=15, sugar=10, salt=3, fish=600) ###Output _____no_output_____ ###Markdown בדוגמה זו השתמשנו בעובדה שהפרמטר שמוגדר בעזרת שתי כוכביות הוא בהכרח מילון. עברנו על כל המפתחות והערכים שבו בעזרת הפעולה items, והדפסנו את המתכון, רכיב אחר רכיב. גרמו לפונקציה print_sushi_recipe להדפיס את הרכיבים לפי סדר משקלם, מהנמוך לגבוה. פרמטר המוגדר בעזרת שתי כוכביות תמיד יופיע בסוף רשימת הפרמטרים. תרגול ביניים: גזור פזורפ כתבו פונקציה בשם my_format שמקבלת מחרוזת, ומספר בלתי מוגבל של פרמטרים עם שמות. הפונקציה תחליף כל הופעה של {key} במחרוזת, אם key הועבר כפרמטר לפונקציה. הערך שבו {key} יוחלף הוא הערך שהועבר ל־key במסגרת העברת הארגומנטים לפונקציה. הפונקציה לא תשתמש בפעולה format של מחרוזות או בפונקציות שלא למדנו עד כה. הנה כמה דוגמאות לקריאות לפונקציה ולערכי ההחזרה שלה: הקריאה my_format("I'm Mr. {name}, look at me!", name="Meeseeks") תחזיר "I'm Mr. Meeseeks, look at me!" הקריאה my_format("{a} {b} {c} {c}", a="wubba", b="lubba", c="dub") תחזיר "wubba lubba dub dub" הקריאה my_format("The universe is basically an animal", animal="Chicken") תחזיר "The universe is basically an animal" הקריאה my_format("The universe is basically an animal") תחזיר "The universe is basically an animal" חוק וסדר נוכל לשלב יחד את כל הטכניקות שלמדנו עד עכשיו לפונקציה אחת. ניצור, לדוגמה, פונקציה שמחשבת עלות הכנה של עוגה. הפונקציה תקבל: את רשימת הרכיבים הקיימים בסופר ואת המחירים שלהם. את רשימת הרכיבים הדרושים כדי להכין עוגה (נניח ששם כל רכיב הוא מילה בודדת). אם ללקוח מגיעה הנחה. שיעור ההנחה, באחוזים. כברירת מחדל, אם ללקוח מגיעה הנחה – שיעורה הוא 10%. לצורך פישוט התרגיל, נתעלם לרגע מעניין הכמויות במתכון :) ###Code def calculate_cake_price(apply_discount, *ingredients, discount_rate=10, **prices): if not apply_discount: discount_rate = 0 final_price = 0 for ingredient in ingredients: final_price = final_price + prices.get(ingredient) final_price = final_price - (final_price * discount_rate / 100) return final_price calculate_cake_price(True, 'chocolate', 'cream', chocolate=30, cream=20, water=5) ###Output _____no_output_____ ###Markdown הפונקציה נכתבה כדי להדגים את הטכניקה, והיא נראית די רע. ראו כמה קשה להבין איזה ארגומנט שייך לאיזה פרמטר בקריאה לפונקציה. יש להפעיל שיקול דעת לפני שימוש בטכניקות של קבלת פרמטרים מרובים. שימו לב לסדר הפרמטרים בכותרת הפונקציה: הארגומנטים שמיקומם קבוע ואנחנו יודעים מי הם הולכים להיות (apply_discount). הארגומנטים שמיקומם קבוע ואנחנו לא יודעים מי הם הולכים להיות (ingredients). הארגומנטים ששמותיהם ידועים וערך ברירת המחדל שלהם נקבע בראש הפונקציה (discount_rate). ערכים נוספים ששמותיהם לא ידועים מראש (prices). נסו לחשוב: למה נקבע דווקא הסדר הזה? איך הייתם כותבים את אותה הפונקציה בדיוק בלי שימוש בטכניקות שלמדנו? השימוש בערכי ברירת מחדל מותר. חשוב! פתרו לפני שתמשיכו! ערכי ברירת מחדל שאפשר לשנותם יש מקרה קצה של ערכי ברירת מחדל שגורם לפייתון להתנהג קצת מוזר. זה קורה כשערך ברירת המחדל שהוגדר בכותרת הפונקציה הוא mutable: ###Code def append(item, l=[]): l.append(item) return l print(append(4, [1, 2, 3])) print(append('a')) ###Output _____no_output_____ ###Markdown עד כאן נראה כאילו הפונקציה פועלת באופן שהיינו מצפים ממנה. ערך ברירת המחדל של הפרמטר l הוא רשימה ריקה, ולכן בקריאה השנייה חוזרת רשימה עם איבר בודד, ['a']. 
Let's call the function a few more times, and discover something odd:

###Code
print(append('b'))
print(append('c'))
print(append('d'))
print(append(4, [1, 2, 3]))
print(append('e'))
###Output
_____no_output_____
###Markdown
Strange and seemingly illogical! We expected to get the list ['b'], then the list ['c'], and so on. Instead, each call adds another item to the same list. Why?

The reason is that Python reads the function header only once – when the function is defined. At that moment, the default value of l points to a single empty list. From then on, every time we do not pass a value for l, l is that same default list we defined at the start. We can demonstrate this by printing the list's id:

###Code
def view_memory_of_l(item, l=[]):
    l.append(item)
    print(f"{l} --> {id(l)}")
    return l

same_list1 = view_memory_of_l('a')
same_list2 = view_memory_of_l('b')
same_list3 = view_memory_of_l('c')
new_list1 = view_memory_of_l(4, [1, 2, 3])
new_list2 = view_memory_of_l(5, [1, 2, 3])
new_list3 = view_memory_of_l(6, [1, 2, 3])
###Output
_____no_output_____
###Markdown
How do we solve the problem? First of all, we avoid defining mutable default values in the function header. If we still want the parameter to default to a list, we do it like this:

###Code
def append(item, l=None):
    if l == None:
        l = []
    l.append(item)
    return l

print(append(4, [1, 2, 3]))
print(append('a'))
###Output
_____no_output_____
###Markdown
Note that the phenomenon does not occur with immutable types, since – as their name implies – they cannot be changed:

###Code
def increment(i=0):
    i = i + 1
    return i

print(increment(100))
print(increment())
print(increment())
print(increment())
print(increment(100))
###Output
_____no_output_____
###Markdown
More examples

An exact imitation of the dictionaries' get method. First, let's refresh our memory about unpacking:

###Code
range_arguments = [1, 10, 3]
range_result = range(*range_arguments)
print(list(range_result))
###Output
_____no_output_____
###Markdown
Or:

###Code
preformatted_message = "My name is {me}, and my sister is {sister}"
parameters = {'me': 'Mei', 'sister': 'Satsuki'}
message = preformatted_message.format(**parameters)
print(message)
###Output
_____no_output_____
###Markdown
If so, we can write:

###Code
def get(dictionary, *args, **kwargs):
    return dictionary.get(*args, **kwargs)
###Output
_____no_output_____
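###Markdown
To round this off, here is a small usage sketch (an editorial example with made-up data, not part of the original exercise). The get wrapper simply forwards whatever positional and keyword arguments it receives to dict.get, so both the one-argument form and the form with a default value keep working:

```python
creatures = {'me': 'Mei', 'sister': 'Satsuki'}
print(get(creatures, 'me'))                   # 'Mei' -- same as creatures.get('me')
print(get(creatures, 'neighbour', 'Totoro'))  # 'Totoro' -- the default is returned for a missing key
```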
notebooks/si-03-tfidf-cluster-family-inspection.ipynb
###Markdown TFIDF Cluster Family InspectionIn this notebook, we compute the Term Frequency-Inverse Document Frequency statisticsused to validate our cluster family names as reported in the SI.Executing this notebook requires access to the text data contained in the individual clusters,which is not provided in the data accompanying the paper.For the United States, the input data can be computed by running our preprocessing and clustering pipelines on the publicly available XMLfrom the Office of the Law Revision Counsel.For Germany, we cannot make the input data available due to licensing restrictions. Preparations ###Code import networkx as nx from gensim.utils import simple_preprocess from gensim import corpora, models import pandas as pd import multiprocessing from legal_data_clustering.utils.graph_api import cluster_families ###Output _____no_output_____ ###Markdown Computing the statistics ###Code def load_cluster_families(base_path): G = nx.read_gpickle( base_path+'13_cluster_evolution_graph/all_0-0_1-0_-1_a-infomap_n100_m1-0_s0_c1000.gpickle.gz' ) cluster_families_data = cluster_families(G,threshold=.15)[:50] leading_clusters = [c[0] for c in cluster_families_data] return cluster_families_data, leading_clusters def read_cluster_texts(node, base_path): year, cluster = node.split('_') with open(f'{base_path}12_cluster_texts/{year}_0-0_1-0_-1_a-infomap_n100_m1-0_s0_c1000/community_{cluster}.txt') as f: return f.read() def process_cluster_familie(clusters, base_path): doc = ' '.join( read_cluster_texts(c, base_path) for c in clusters ) return simple_preprocess(doc) def compute_tfidf_csv(dataset): base_path = f'../../legal-networks-data/{dataset}/' cluster_families_data, leading_clusters = load_cluster_families(base_path) dictionary = corpora.Dictionary() BoW_corpus = [] for i, c in enumerate(cluster_families_data): doc = process_cluster_familie(c, base_path) bow = dictionary.doc2bow(doc, allow_update=True) BoW_corpus.append(bow) print('done', i) tfidf = models.TfidfModel(BoW_corpus, smartirs='ntc') data = [ {dictionary[key]: freq for key, freq in doc} for doc in tfidf[BoW_corpus] ] data_sorted = [ sorted([x for x in cluster_family.items()], key=lambda y: y[-1], reverse=True) for cluster_family in data ] df = pd.DataFrame({ leading: [word for word, cnt in fam_data[:250]] for leading, fam_data in zip(leading_clusters, data_sorted) }) df.to_csv(f'../results/tfidf_cluster_family_inspection_{dataset}.csv') compute_tfidf_csv('us_reg') compute_tfidf_csv('de_reg') ###Output done 0 done 1 done 2 done 3 done 4 done 5 done 6 done 7 done 8 done 9 done 10 done 11 done 12 done 13 done 14 done 15 done 16 done 17 done 18 done 19 done 20 done 21 done 22 done 23 done 24 done 25 done 26 done 27 done 28 done 29 done 30 done 31 done 32 done 33 done 34 done 35 done 36 done 37 done 38 done 39 done 40 done 41 done 42 done 43 done 44 done 45 done 46 done 47 done 48 done 49
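###Markdown
As a quick sanity check on the exported rankings, the CSV written above can be reloaded and the top-weighted terms of each cluster family inspected directly. This is an illustrative sketch added for convenience; the file name and column layout follow the compute_tfidf_csv function above (one column per leading cluster of a family, rows ordered by descending TF-IDF weight), and the relative path assumes the notebook's own working directory.

```python
import pandas as pd

# Columns are the leading clusters of the largest cluster families;
# each column lists that family's highest-scoring terms.
tfidf_us = pd.read_csv('../results/tfidf_cluster_family_inspection_us_reg.csv', index_col=0)
print(tfidf_us.shape)           # roughly (250, 50): top-250 terms for each of up to 50 families
print(tfidf_us.iloc[:10, :3])   # top 10 terms of the first three families
```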
5-sequence-models/week3/Neural_machine_translation_with_attention_v4a.ipynb
###Markdown Neural Machine TranslationWelcome to your first programming assignment for this week! * You will build a Neural Machine Translation (NMT) model to translate human-readable dates ("25th of June, 2009") into machine-readable dates ("2009-06-25"). * You will do this using an attention model, one of the most sophisticated sequence-to-sequence models. This notebook was produced together with NVIDIA's Deep Learning Institute. Updates If you were working on the notebook before this update...* The current notebook is version "4a".* You can find your original work saved in the notebook with the previous version name ("v4") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* Clarified names of variables to be consistent with the lectures and consistent within the assignment - pre-attention bi-directional LSTM: the first LSTM that processes the input data. - 'a': the hidden state of the pre-attention LSTM. - post-attention LSTM: the LSTM that outputs the translation. - 's': the hidden state of the post-attention LSTM. - energies "e". The output of the dense function that takes "a" and "s" as inputs. - All references to "output activation" are updated to "hidden state". - "post-activation" sequence model is updated to "post-attention sequence model". - 3.1: "Getting the activations from the Network" renamed to "Getting the attention weights from the network." - Appropriate mentions of "activation" replaced "attention weights." - Sequence of alphas corrected to be a sequence of "a" hidden states.* one_step_attention: - Provides sample code for each Keras layer, to show how to call the functions. - Reminds students to provide the list of hidden states in a specific order, in order to pause the autograder.* model - Provides sample code for each Keras layer, to show how to call the functions. - Added a troubleshooting note about handling errors. - Fixed typo: outputs should be of length 10 and not 11.* define optimizer and compile model - Provides sample code for each Keras layer, to show how to call the functions.* Spelling, grammar and wording corrections. Let's load all the packages you will need for this assignment. ###Code from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply from keras.layers import RepeatVector, Dense, Activation, Lambda from keras.optimizers import Adam from keras.utils import to_categorical from keras.models import load_model, Model import keras.backend as K import numpy as np from faker import Faker import random from tqdm import tqdm from babel.dates import format_date from nmt_utils import * import matplotlib.pyplot as plt %matplotlib inline ###Output Using TensorFlow backend. ###Markdown 1 - Translating human readable dates into machine readable dates* The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. * However, language translation requires massive datasets and usually takes days of training on GPUs. * To give you a place to experiment with these models without using massive datasets, we will perform a simpler "date translation" task. * The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*) * The network will translate them into standardized, machine readable dates (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*). 
* We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. <!-- Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !--> 1.1 - DatasetWe will train the model on a dataset of 10,000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples. ###Code m = 10000 dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m) dataset[:10] ###Output _____no_output_____ ###Markdown You've loaded:- `dataset`: a list of tuples of (human readable date, machine readable date).- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index.- `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. - **Note**: These indices are not necessarily consistent with `human_vocab`. - `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters. Let's preprocess the data and map the raw text data into the index values. - We will set Tx=30 - We assume Tx is the maximum length of the human readable date. - If we get a longer input, we would have to truncate it.- We will set Ty=10 - "YYYY-MM-DD" is 10 characters long. ###Code Tx = 30 Ty = 10 X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty) print("X.shape:", X.shape) print("Y.shape:", Y.shape) print("Xoh.shape:", Xoh.shape) print("Yoh.shape:", Yoh.shape) ###Output X.shape: (10000, 30) Y.shape: (10000, 10) Xoh.shape: (10000, 30, 37) Yoh.shape: (10000, 10, 11) ###Markdown You now have:- `X`: a processed version of the human readable dates in the training set. - Each character in X is replaced by an index (integer) mapped to the character using `human_vocab`. - Each date is padded to ensure a length of $T_x$ using a special character (). - `X.shape = (m, Tx)` where m is the number of training examples in a batch.- `Y`: a processed version of the machine readable dates in the training set. - Each character is replaced by the index (integer) it is mapped to in `machine_vocab`. - `Y.shape = (m, Ty)`. - `Xoh`: one-hot version of `X` - Each index in `X` is converted to the one-hot representation (if the index is 2, the one-hot version has the index position 2 set to 1, and the remaining positions are 0. - `Xoh.shape = (m, Tx, len(human_vocab))`- `Yoh`: one-hot version of `Y` - Each index in `Y` is converted to the one-hot representation. - `Yoh.shape = (m, Tx, len(machine_vocab))`. - `len(machine_vocab) = 11` since there are 10 numeric digits (0 to 9) and the `-` symbol. * Let's also look at some examples of preprocessed training examples. * Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed. 
###Code index = 0 print("Source date:", dataset[index][0]) print("Target date:", dataset[index][1]) print() print("Source after preprocessing (indices):", X[index]) print("Target after preprocessing (indices):", Y[index]) print() print("Source after preprocessing (one-hot):", Xoh[index]) print("Target after preprocessing (one-hot):", Yoh[index]) ###Output Source date: 9 may 1998 Target date: 1998-05-09 Source after preprocessing (indices): [12 0 24 13 34 0 4 12 12 11 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36] Target after preprocessing (indices): [ 2 10 10 9 0 1 6 0 1 10] Source after preprocessing (one-hot): [[ 0. 0. 0. ..., 0. 0. 0.] [ 1. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] ..., [ 0. 0. 0. ..., 0. 0. 1.] [ 0. 0. 0. ..., 0. 0. 1.] [ 0. 0. 0. ..., 0. 0. 1.]] Target after preprocessing (one-hot): [[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.] [ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]] ###Markdown 2 - Neural machine translation with attention* If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. * Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down. * The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step. 2.1 - Attention mechanismIn this part, you will implement the attention mechanism presented in the lecture videos. * Here is a figure to remind you how the model works. * The diagram on the left shows the attention model. * The diagram on the right shows what one "attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$. * The attention variables $\alpha^{\langle t, t' \rangle}$ are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$). **Figure 1**: Neural machine translation with attention Here are some properties of the model that you may notice: Pre-attention and Post-attention LSTMs on both sides of the attention mechanism- There are two separate LSTMs in this model (see diagram on the left): pre-attention and post-attention LSTMs.- *Pre-attention* Bi-LSTM is the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism. - The attention mechanism is shown in the middle of the left-hand diagram. - The pre-attention Bi-LSTM goes through $T_x$ time steps- *Post-attention* LSTM: at the top of the diagram comes *after* the attention mechanism. - The post-attention LSTM goes through $T_y$ time steps. - The post-attention LSTM passes the hidden state $s^{\langle t \rangle}$ and cell state $c^{\langle t \rangle}$ from one time step to the next. An LSTM has both a hidden state and cell state* In the lecture videos, we were using only a basic RNN for the post-attention sequence model * This means that the state captured by the RNN was outputting only the hidden state $s^{\langle t\rangle}$. * In this assignment, we are using an LSTM instead of a basic RNN. * So the LSTM has both the hidden state $s^{\langle t\rangle}$ and the cell state $c^{\langle t\rangle}$. 
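###Markdown
If the distinction between the hidden state and the cell state is new to you, the following stand-alone sketch (an editorial illustration with toy shapes, not part of the graded assignment) shows what the Keras layers used here return. With `return_state=True` an LSTM hands back the sequence of hidden states plus the final hidden state and final cell state, and a `Bidirectional` wrapper concatenates the forward and backward hidden states, doubling the last dimension.

```python
import numpy as np
from keras.layers import Input, LSTM, Bidirectional
from keras.models import Model

x = np.random.rand(2, 30, 37)  # toy batch: m=2 examples, Tx=30 steps, 37 one-hot features

inp = Input(shape=(30, 37))
seq, h, c = LSTM(64, return_sequences=True, return_state=True)(inp)
bi = Bidirectional(LSTM(32, return_sequences=True))(inp)
model = Model(inp, [seq, h, c, bi])

seq_o, h_o, c_o, bi_o = model.predict(x)
print(seq_o.shape, h_o.shape, c_o.shape)  # (2, 30, 64) (2, 64) (2, 64)
print(bi_o.shape)                          # (2, 30, 64): forward 32 + backward 32 units
```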
Each time step does not use predictions from the previous time step* Unlike previous text generation examples earlier in the course, in this model, the post-attention LSTM at time $t$ does not take the previous time step's prediction $y^{\langle t-1 \rangle}$ as input.* The post-attention LSTM at time 't' only takes the hidden state $s^{\langle t\rangle}$ and cell state $c^{\langle t\rangle}$ as input. * We have designed the model this way because unlike language generation (where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date. Concatenation of hidden states from the forward and backward pre-attention LSTMs- $\overrightarrow{a}^{\langle t \rangle}$: hidden state of the forward-direction, pre-attention LSTM.- $\overleftarrow{a}^{\langle t \rangle}$: hidden state of the backward-direction, pre-attention LSTM.- $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}, \overleftarrow{a}^{\langle t \rangle}]$: the concatenation of the activations of both the forward-direction $\overrightarrow{a}^{\langle t \rangle}$ and backward-directions $\overleftarrow{a}^{\langle t \rangle}$ of the pre-attention Bi-LSTM. Computing "energies" $e^{\langle t, t' \rangle}$ as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$- Recall in the lesson videos "Attention Model", at time 6:45 to 8:16, the definition of "e" as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$. - "e" is called the "energies" variable. - $s^{\langle t-1 \rangle}$ is the hidden state of the post-attention LSTM - $a^{\langle t' \rangle}$ is the hidden state of the pre-attention LSTM. - $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ are fed into a simple neural network, which learns the function to output $e^{\langle t, t' \rangle}$. - $e^{\langle t, t' \rangle}$ is then used when computing the attention $\alpha^{\langle t, t' \rangle}$ that $y^{\langle t \rangle}$ should pay to $a^{\langle t' \rangle}$. - The diagram on the right of figure 1 uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times.- Then it uses `Concatenation` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$.- The concatenation of $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ is fed into a "Dense" layer, which computes $e^{\langle t, t' \rangle}$. - $e^{\langle t, t' \rangle}$ is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$.- Note that the diagram doesn't explicitly show variable $e^{\langle t, t' \rangle}$, but $e^{\langle t, t' \rangle}$ is above the Dense layer and below the Softmax layer in the diagram in the right half of figure 1.- We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. Implementation Details Let's implement this neural translator. You will start by implementing two functions: `one_step_attention()` and `model()`. one_step_attention* The inputs to the one_step_attention at time step $t$ are: - $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$: all hidden states of the pre-attention Bi-LSTM.
- $s^{\langle t-1 \rangle}$: the previous hidden state of the post-attention LSTM * one_step_attention computes: - $[\alpha^{\langle t,1 \rangle},\alpha^{\langle t,2 \rangle}, ..., \alpha^{\langle t,T_x \rangle}]$: the attention weights - $context^{ \langle t \rangle }$: the context vector: $$context^{\langle t \rangle} = \sum_{t' = 1}^{T_x} \alpha^{\langle t,t' \rangle}a^{\langle t' \rangle}\tag{1}$$ Clarifying 'context' and 'c'- In the lecture videos, the context was denoted $c^{\langle t \rangle}$- In the assignment, we are calling the context $context^{\langle t \rangle}$. - This is to avoid confusion with the post-attention LSTM's internal memory cell variable, which is also denoted $c^{\langle t \rangle}$. Implement `one_step_attention`**Exercise**: Implement `one_step_attention()`. * The function `model()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop.* It is important that all $T_y$ copies have the same weights. * It should not reinitialize the weights every time. * In other words, all $T_y$ steps should have shared weights. * Here's how you can implement layers with shareable weights in Keras: 1. Define the layer objects in a variable scope that is outside of the `one_step_attention` function. For example, defining the objects as global variables would work. - Note that defining these variables inside the scope of the function `model` would technically work, since `model` will then call the `one_step_attention` function. For the purposes of making grading and troubleshooting easier, we are defining these as global variables. Note that the automatic grader will expect these to be global variables as well. 2. Call these objects when propagating the input.* We have defined the layers you need as global variables. * Please run the following cells to create them. * Please note that the automatic grader expects these global variables with the given variable names. For grading purposes, please do not rename the global variables.* Please check the Keras documentation to learn more about these layers. The layers are functions. Below are examples of how to call these functions. * [RepeatVector()](https://keras.io/layers/core/repeatvector)```Pythonvar_repeated = repeat_layer(var1)``` * [Concatenate()](https://keras.io/layers/merge/concatenate) ```Pythonconcatenated_vars = concatenate_layer([var1,var2,var3])``` * [Dense()](https://keras.io/layers/core/dense) ```Pythonvar_out = dense_layer(var_in)``` * [Activation()](https://keras.io/layers/core/activation) ```Pythonactivation = activation_layer(var_in) ``` * [Dot()](https://keras.io/layers/merge/dot) ```Pythondot_product = dot_layer([var1,var2])``` ###Code # Defined shared layers as global variables repeator = RepeatVector(Tx) concatenator = Concatenate(axis=-1) densor1 = Dense(10, activation = "tanh") densor2 = Dense(1, activation = "relu") activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook dotor = Dot(axes = 1) # GRADED FUNCTION: one_step_attention def one_step_attention(a, s_prev): """ Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights "alphas" and the hidden states "a" of the Bi-LSTM.
Arguments: a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a) s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s) Returns: context -- context vector, input of the next (post-attention) LSTM cell """ ### START CODE HERE ### # Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line) s_prev = repeator(s_prev) # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line) # For grading purposes, please list 'a' first and 's_prev' second, in this order. concat = concatenator([a, s_prev]) # Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines) e = densor1(concat) # Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines) energies = densor2(e) # Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line) alphas = activator(energies) # Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line) context = dotor([alphas, a]) ### END CODE HERE ### return context ###Output _____no_output_____ ###Markdown You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function. model* `model` first runs the input through a Bi-LSTM to get $[a^{},a^{}, ..., a^{}]$. * Then, `model` calls `one_step_attention()` $T_y$ times using a `for` loop. At each iteration of this loop: - It gives the computed context vector $context^{}$ to the post-attention LSTM. - It runs the output of the post-attention LSTM through a dense layer with softmax activation. - The softmax generates a prediction $\hat{y}^{}$. **Exercise**: Implement `model()` as explained in figure 1 and the text above. Again, we have defined global layers that will share weights to be used in `model()`. ###Code n_a = 32 # number of units for the pre-attention, bi-directional LSTM's hidden state 'a' n_s = 64 # number of units for the post-attention LSTM's hidden state "s" # Please note, this is the post attention LSTM cell. # For the purposes of passing the automatic grader # please do not modify this global variable. This will be corrected once the automatic grader is also updated. post_activation_LSTM_cell = LSTM(n_s, return_state = True) # post-attention LSTM output_layer = Dense(len(machine_vocab), activation=softmax) ###Output _____no_output_____ ###Markdown Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps: 1. Propagate the input `X` into a bi-directional LSTM. * [Bidirectional](https://keras.io/layers/wrappers/bidirectional) * [LSTM](https://keras.io/layers/recurrent/lstm) * Remember that we want the LSTM to return a full sequence instead of just the last hidden state. Sample code:```Pythonsequence_of_hidden_states = Bidirectional(LSTM(units=..., return_sequences=...))(the_input_X)``` 2. Iterate for $t = 0, \cdots, T_y-1$: 1. Call `one_step_attention()`, passing in the sequence of hidden states $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{ \langle T_x \rangle}]$ from the pre-attention bi-directional LSTM, and the previous hidden state $s^{}$ from the post-attention LSTM to calculate the context vector $context^{}$. 2. Give $context^{}$ to the post-attention LSTM cell. 
- Remember to pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-states $c^{\langle t-1\rangle}$ of this LSTM * This outputs the new hidden state $s^{}$ and the new cell state $c^{}$. Sample code: ```Python next_hidden_state, _ , next_cell_state = post_activation_LSTM_cell(inputs=..., initial_state=[prev_hidden_state, prev_cell_state]) ``` Please note that the layer is actually the "post attention LSTM cell". For the purposes of passing the automatic grader, please do not modify the naming of this global variable. This will be fixed when we deploy updates to the automatic grader. 3. Apply a dense, softmax layer to $s^{}$, get the output. Sample code: ```Python output = output_layer(inputs=...) ``` 4. Save the output by adding it to the list of outputs.3. Create your Keras model instance. * It should have three inputs: * `X`, the one-hot encoded inputs to the model, of shape ($T_{x}, humanVocabSize)$ * $s^{\langle 0 \rangle}$, the initial hidden state of the post-attention LSTM * $c^{\langle 0 \rangle}$), the initial cell state of the post-attention LSTM * The output is the list of outputs. Sample code ```Python model = Model(inputs=[...,...,...], outputs=...) ``` ###Code # GRADED FUNCTION: model def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size): """ Arguments: Tx -- length of the input sequence Ty -- length of the output sequence n_a -- hidden state size of the Bi-LSTM n_s -- hidden state size of the post-attention LSTM human_vocab_size -- size of the python dictionary "human_vocab" machine_vocab_size -- size of the python dictionary "machine_vocab" Returns: model -- Keras model instance """ # Define the inputs of your model with a shape (Tx,) # Define s0 (initial hidden state) and c0 (initial cell state) # for the decoder LSTM with shape (n_s,) X = Input(shape=(Tx, human_vocab_size)) s0 = Input(shape=(n_s,), name='s0') c0 = Input(shape=(n_s,), name='c0') s = s0 c = c0 # Initialize empty list of outputs outputs = [] ### START CODE HERE ### # Step 1: Define your pre-attention Bi-LSTM. (≈ 1 line) a = Bidirectional(LSTM(n_a, return_sequences=True))(X) # Step 2: Iterate for Ty steps for t in range(Ty): # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line) context = one_step_attention(a,s) # Step 2.B: Apply the post-attention LSTM cell to the "context" vector. # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line) s, _, c = post_activation_LSTM_cell(context,initial_state=[s,c]) # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line) out = output_layer(s) # Step 2.D: Append "out" to the "outputs" list (≈ 1 line) outputs.append(out) # Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line) model = Model(inputs = [X,s0,c0],outputs=outputs) ### END CODE HERE ### return model ###Output _____no_output_____ ###Markdown Run the following cell to create your model. ###Code model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab)) ###Output _____no_output_____ ###Markdown Troubleshooting Note* If you are getting repeated errors after an initially incorrect implementation of "model", but believe that you have corrected the error, you may still see error messages when building your model. * A solution is to save and restart your kernel (or shutdown then restart your notebook), and re-run the cells. Let's get a summary of the model to check if it matches the expected output. 
###Code model.summary() ###Output ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 30, 37) 0 ____________________________________________________________________________________________________ s0 (InputLayer) (None, 64) 0 ____________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0] ____________________________________________________________________________________________________ repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] lstm_1[0][0] lstm_1[1][0] lstm_1[2][0] lstm_1[3][0] lstm_1[4][0] lstm_1[5][0] lstm_1[6][0] lstm_1[7][0] lstm_1[8][0] ____________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0] repeat_vector_1[0][0] bidirectional_1[0][0] repeat_vector_1[1][0] bidirectional_1[0][0] repeat_vector_1[2][0] bidirectional_1[0][0] repeat_vector_1[3][0] bidirectional_1[0][0] repeat_vector_1[4][0] bidirectional_1[0][0] repeat_vector_1[5][0] bidirectional_1[0][0] repeat_vector_1[6][0] bidirectional_1[0][0] repeat_vector_1[7][0] bidirectional_1[0][0] repeat_vector_1[8][0] bidirectional_1[0][0] repeat_vector_1[9][0] ____________________________________________________________________________________________________ dense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0] concatenate_1[1][0] concatenate_1[2][0] concatenate_1[3][0] concatenate_1[4][0] concatenate_1[5][0] concatenate_1[6][0] concatenate_1[7][0] concatenate_1[8][0] concatenate_1[9][0] ____________________________________________________________________________________________________ dense_2 (Dense) (None, 30, 1) 11 dense_1[0][0] dense_1[1][0] dense_1[2][0] dense_1[3][0] dense_1[4][0] dense_1[5][0] dense_1[6][0] dense_1[7][0] dense_1[8][0] dense_1[9][0] ____________________________________________________________________________________________________ attention_weights (Activation) (None, 30, 1) 0 dense_2[0][0] dense_2[1][0] dense_2[2][0] dense_2[3][0] dense_2[4][0] dense_2[5][0] dense_2[6][0] dense_2[7][0] dense_2[8][0] dense_2[9][0] ____________________________________________________________________________________________________ dot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0] bidirectional_1[0][0] attention_weights[1][0] bidirectional_1[0][0] attention_weights[2][0] bidirectional_1[0][0] attention_weights[3][0] bidirectional_1[0][0] attention_weights[4][0] bidirectional_1[0][0] attention_weights[5][0] bidirectional_1[0][0] attention_weights[6][0] bidirectional_1[0][0] attention_weights[7][0] bidirectional_1[0][0] attention_weights[8][0] bidirectional_1[0][0] attention_weights[9][0] bidirectional_1[0][0] ____________________________________________________________________________________________________ c0 (InputLayer) (None, 64) 0 ____________________________________________________________________________________________________ lstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0] s0[0][0] c0[0][0] dot_1[1][0] lstm_1[0][0] lstm_1[0][2] dot_1[2][0] lstm_1[1][0] lstm_1[1][2] dot_1[3][0] lstm_1[2][0] lstm_1[2][2] dot_1[4][0] lstm_1[3][0] lstm_1[3][2] dot_1[5][0] lstm_1[4][0] lstm_1[4][2] dot_1[6][0] lstm_1[5][0] lstm_1[5][2] dot_1[7][0] lstm_1[6][0] lstm_1[6][2] dot_1[8][0] 
lstm_1[7][0] lstm_1[7][2] dot_1[9][0] lstm_1[8][0] lstm_1[8][2] ____________________________________________________________________________________________________ dense_3 (Dense) (None, 11) 715 lstm_1[0][0] lstm_1[1][0] lstm_1[2][0] lstm_1[3][0] lstm_1[4][0] lstm_1[5][0] lstm_1[6][0] lstm_1[7][0] lstm_1[8][0] lstm_1[9][0] ==================================================================================================== Total params: 52,960 Trainable params: 52,960 Non-trainable params: 0 ____________________________________________________________________________________________________ ###Markdown **Expected Output**:Here is the summary you should see **Total params:** 52,960 **Trainable params:** 52,960 **Non-trainable params:** 0 **bidirectional_1's output shape ** (None, 30, 64) **repeat_vector_1's output shape ** (None, 30, 64) **concatenate_1's output shape ** (None, 30, 128) **attention_weights's output shape ** (None, 30, 1) **dot_1's output shape ** (None, 1, 64) **dense_3's output shape ** (None, 11) Compile the model* After creating your model in Keras, you need to compile it and define the loss function, optimizer and metrics you want to use. * Loss function: 'categorical_crossentropy'. * Optimizer: [Adam](https://keras.io/optimizers/adam) [optimizer](https://keras.io/optimizers/usage-of-optimizers) - learning rate = 0.005 - $\beta_1 = 0.9$ - $\beta_2 = 0.999$ - decay = 0.01 * metric: 'accuracy' Sample code```Pythonoptimizer = Adam(lr=..., beta_1=..., beta_2=..., decay=...)model.compile(optimizer=..., loss=..., metrics=[...])``` ###Code ### START CODE HERE ### (≈2 lines) optimizer = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01) opt = model.compile(optimizer=optimizer, metrics=['accuracy'], loss = 'categorical_crossentropy') ### END CODE HERE ### ###Output _____no_output_____ ###Markdown Define inputs and outputs, and fit the modelThe last step is to define all your inputs and outputs to fit the model:- You have input X of shape $(m = 10000, T_x = 30)$ containing the training examples.- You need to create `s0` and `c0` to initialize your `post_attention_LSTM_cell` with zeros.- Given the `model()` you coded, you need the "outputs" to be a list of 10 elements of shape (m, T_y). - The list `outputs[i][0], ..., outputs[i][Ty]` represents the true labels (characters) corresponding to the $i^{th}$ training example (`X[i]`). - `outputs[i][j]` is the true label of the $j^{th}$ character in the $i^{th}$ training example. ###Code s0 = np.zeros((m, n_s)) c0 = np.zeros((m, n_s)) outputs = list(Yoh.swapaxes(0,1)) ###Output _____no_output_____ ###Markdown Let's now fit the model and run it for one epoch. ###Code model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100) ###Output Epoch 1/1 10000/10000 [==============================] - 54s - loss: 16.7106 - dense_3_loss_1: 1.1354 - dense_3_loss_2: 0.9854 - dense_3_loss_3: 1.7414 - dense_3_loss_4: 2.6659 - dense_3_loss_5: 0.8078 - dense_3_loss_6: 1.3133 - dense_3_loss_7: 2.7032 - dense_3_loss_8: 0.9927 - dense_3_loss_9: 1.7525 - dense_3_loss_10: 2.6130 - dense_3_acc_1: 0.5357 - dense_3_acc_2: 0.7170 - dense_3_acc_3: 0.3034 - dense_3_acc_4: 0.0849 - dense_3_acc_5: 0.9131 - dense_3_acc_6: 0.3224 - dense_3_acc_7: 0.0442 - dense_3_acc_8: 0.9144 - dense_3_acc_9: 0.2279 - dense_3_acc_10: 0.0966 ###Markdown While training you can see the loss as well as the accuracy on each of the 10 positions of the output. 
The table below gives you an example of what the accuracies could be if the batch had 2 examples: Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.) ###Code model.load_weights('models/model.h5') ###Output _____no_output_____ ###Markdown You can now see the results on new examples. ###Code EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001'] for example in EXAMPLES: source = string_to_int(example, Tx, human_vocab) source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1) prediction = model.predict([source, s0, c0]) prediction = np.argmax(prediction, axis = -1) output = [inv_machine_vocab[int(i)] for i in prediction] print("source:", example) print("output:", ''.join(output),"\n") ###Output source: 3 May 1979 output: 1979-05-03 source: 5 April 09 output: 2009-05-05 source: 21th of August 2016 output: 2016-08-21 source: Tue 10 Jul 2007 output: 2007-07-10 source: Saturday May 9 2018 output: 2018-05-09 source: March 3 2001 output: 2001-03-03 source: March 3rd 2001 output: 2001-03-03 source: 1 March 2001 output: 2001-03-01 ###Markdown You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. 3 - Visualizing Attention (Optional / Ungraded)Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (such as the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what each part of the output is looking at which part of the input.Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this: **Figure 8**: Full Attention MapNotice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We also see that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018." 3.1 - Getting the attention weights from the networkLets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$. To figure out where the attention values are located, let's start by printing a summary of the model . 
###Code model.summary() ###Output ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 30, 37) 0 ____________________________________________________________________________________________________ s0 (InputLayer) (None, 64) 0 ____________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0] ____________________________________________________________________________________________________ repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] lstm_1[0][0] lstm_1[1][0] lstm_1[2][0] lstm_1[3][0] lstm_1[4][0] lstm_1[5][0] lstm_1[6][0] lstm_1[7][0] lstm_1[8][0] ____________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0] repeat_vector_1[0][0] bidirectional_1[0][0] repeat_vector_1[1][0] bidirectional_1[0][0] repeat_vector_1[2][0] bidirectional_1[0][0] repeat_vector_1[3][0] bidirectional_1[0][0] repeat_vector_1[4][0] bidirectional_1[0][0] repeat_vector_1[5][0] bidirectional_1[0][0] repeat_vector_1[6][0] bidirectional_1[0][0] repeat_vector_1[7][0] bidirectional_1[0][0] repeat_vector_1[8][0] bidirectional_1[0][0] repeat_vector_1[9][0] ____________________________________________________________________________________________________ dense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0] concatenate_1[1][0] concatenate_1[2][0] concatenate_1[3][0] concatenate_1[4][0] concatenate_1[5][0] concatenate_1[6][0] concatenate_1[7][0] concatenate_1[8][0] concatenate_1[9][0] ____________________________________________________________________________________________________ dense_2 (Dense) (None, 30, 1) 11 dense_1[0][0] dense_1[1][0] dense_1[2][0] dense_1[3][0] dense_1[4][0] dense_1[5][0] dense_1[6][0] dense_1[7][0] dense_1[8][0] dense_1[9][0] ____________________________________________________________________________________________________ attention_weights (Activation) (None, 30, 1) 0 dense_2[0][0] dense_2[1][0] dense_2[2][0] dense_2[3][0] dense_2[4][0] dense_2[5][0] dense_2[6][0] dense_2[7][0] dense_2[8][0] dense_2[9][0] ____________________________________________________________________________________________________ dot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0] bidirectional_1[0][0] attention_weights[1][0] bidirectional_1[0][0] attention_weights[2][0] bidirectional_1[0][0] attention_weights[3][0] bidirectional_1[0][0] attention_weights[4][0] bidirectional_1[0][0] attention_weights[5][0] bidirectional_1[0][0] attention_weights[6][0] bidirectional_1[0][0] attention_weights[7][0] bidirectional_1[0][0] attention_weights[8][0] bidirectional_1[0][0] attention_weights[9][0] bidirectional_1[0][0] ____________________________________________________________________________________________________ c0 (InputLayer) (None, 64) 0 ____________________________________________________________________________________________________ lstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0] s0[0][0] c0[0][0] dot_1[1][0] lstm_1[0][0] lstm_1[0][2] dot_1[2][0] lstm_1[1][0] lstm_1[1][2] dot_1[3][0] lstm_1[2][0] lstm_1[2][2] dot_1[4][0] lstm_1[3][0] lstm_1[3][2] dot_1[5][0] lstm_1[4][0] lstm_1[4][2] dot_1[6][0] lstm_1[5][0] lstm_1[5][2] dot_1[7][0] lstm_1[6][0] lstm_1[6][2] dot_1[8][0] 
lstm_1[7][0] lstm_1[7][2] dot_1[9][0] lstm_1[8][0] lstm_1[8][2] ____________________________________________________________________________________________________ dense_3 (Dense) (None, 11) 715 lstm_1[0][0] lstm_1[1][0] lstm_1[2][0] lstm_1[3][0] lstm_1[4][0] lstm_1[5][0] lstm_1[6][0] lstm_1[7][0] lstm_1[8][0] lstm_1[9][0] ==================================================================================================== Total params: 52,960 Trainable params: 52,960 Non-trainable params: 0 ____________________________________________________________________________________________________ ###Markdown Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the attention weights from this layer.The function `attention_map()` pulls out the attention values from your model and plots them. ###Code attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64); ###Output _____no_output_____ ###Markdown Neural Machine TranslationWelcome to your first programming assignment for this week! * You will build a Neural Machine Translation (NMT) model to translate human-readable dates ("25th of June, 2009") into machine-readable dates ("2009-06-25"). * You will do this using an attention model, one of the most sophisticated sequence-to-sequence models. This notebook was produced together with NVIDIA's Deep Learning Institute. Table of Contents- [Packages](0)- [1 - Translating Human Readable Dates Into Machine Readable Dates](1) - [1.1 - Dataset](1-1)- [2 - Neural Machine Translation with Attention](2) - [2.1 - Attention Mechanism](2-1) - [Exercise 1 - one_step_attention](ex-1) - [Exercise 2 - modelf](ex-2) - [Exercise 3 - Compile the Model](ex-3)- [3 - Visualizing Attention (Optional / Ungraded)](3) - [3.1 - Getting the Attention Weights From the Network](3-1) Packages ###Code from tensorflow.keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply from tensorflow.keras.layers import RepeatVector, Dense, Activation, Lambda from tensorflow.keras.optimizers import Adam from tensorflow.keras.utils import to_categorical from tensorflow.keras.models import load_model, Model import tensorflow.keras.backend as K import tensorflow as tf import numpy as np from faker import Faker import random from tqdm import tqdm from babel.dates import format_date from nmt_utils import * import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 1 - Translating Human Readable Dates Into Machine Readable Dates* The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. * However, language translation requires massive datasets and usually takes days of training on GPUs. * To give you a place to experiment with these models without using massive datasets, we will perform a simpler "date translation" task. * The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*) * The network will translate them into standardized, machine readable dates (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*). * We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. <!-- Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. 
Count and figure out how the formats work, you will need this knowledge later. !--> 1.1 - DatasetWe will train the model on a dataset of 10,000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples. ###Code m = 10000 dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m) dataset[:10] ###Output _____no_output_____ ###Markdown You've loaded:- `dataset`: a list of tuples of (human readable date, machine readable date).- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index.- `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. - **Note**: These indices are not necessarily consistent with `human_vocab`. - `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters. Let's preprocess the data and map the raw text data into the index values. - We will set Tx=30 - We assume Tx is the maximum length of the human readable date. - If we get a longer input, we would have to truncate it.- We will set Ty=10 - "YYYY-MM-DD" is 10 characters long. ###Code Tx = 30 Ty = 10 X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty) print("X.shape:", X.shape) print("Y.shape:", Y.shape) print("Xoh.shape:", Xoh.shape) print("Yoh.shape:", Yoh.shape) ###Output X.shape: (10000, 30) Y.shape: (10000, 10) Xoh.shape: (10000, 30, 37) Yoh.shape: (10000, 10, 11) ###Markdown You now have:- `X`: a processed version of the human readable dates in the training set. - Each character in X is replaced by an index (integer) mapped to the character using `human_vocab`. - Each date is padded to ensure a length of $T_x$ using a special character (). - `X.shape = (m, Tx)` where m is the number of training examples in a batch.- `Y`: a processed version of the machine readable dates in the training set. - Each character is replaced by the index (integer) it is mapped to in `machine_vocab`. - `Y.shape = (m, Ty)`. - `Xoh`: one-hot version of `X` - Each index in `X` is converted to the one-hot representation (if the index is 2, the one-hot version has the index position 2 set to 1, and the remaining positions are 0. - `Xoh.shape = (m, Tx, len(human_vocab))`- `Yoh`: one-hot version of `Y` - Each index in `Y` is converted to the one-hot representation. - `Yoh.shape = (m, Ty, len(machine_vocab))`. - `len(machine_vocab) = 11` since there are 10 numeric digits (0 to 9) and the `-` symbol. * Let's also look at some examples of preprocessed training examples. * Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed. ###Code index = 0 print("Source date:", dataset[index][0]) print("Target date:", dataset[index][1]) print() print("Source after preprocessing (indices):", X[index]) print("Target after preprocessing (indices):", Y[index]) print() print("Source after preprocessing (one-hot):", Xoh[index]) print("Target after preprocessing (one-hot):", Yoh[index]) ###Output Source date: 30 october 1981 Target date: 1981-10-30 Source after preprocessing (indices): [ 6 3 0 26 15 30 26 14 17 28 0 4 12 11 4 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36] Target after preprocessing (indices): [ 2 10 9 2 0 2 1 0 4 1] Source after preprocessing (one-hot): [[0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [1. 0. 0. ... 0. 0. 0.] ... [0. 0. 0. ... 0. 0. 1.] [0. 0. 0. ... 0. 0. 1.] [0. 0. 0. 
... 0. 0. 1.]] Target after preprocessing (one-hot): [[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.] [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.] [0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] ###Markdown 2 - Neural Machine Translation with Attention* If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. * Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down. * The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step. 2.1 - Attention MechanismIn this part, you will implement the attention mechanism presented in the lecture videos. * Here is a figure to remind you how the model works. * The diagram on the left shows the attention model. * The diagram on the right shows what one "attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$. * The attention variables $\alpha^{\langle t, t' \rangle}$ are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$). **Figure 1**: Neural machine translation with attention Here are some properties of the model that you may notice: Pre-attention and Post-attention LSTMs on both sides of the attention mechanism- There are two separate LSTMs in this model (see diagram on the left): pre-attention and post-attention LSTMs.- *Pre-attention* Bi-LSTM is the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism. - The attention mechanism is shown in the middle of the left-hand diagram. - The pre-attention Bi-LSTM goes through $T_x$ time steps- *Post-attention* LSTM: at the top of the diagram comes *after* the attention mechanism. - The post-attention LSTM goes through $T_y$ time steps. - The post-attention LSTM passes the hidden state $s^{\langle t \rangle}$ and cell state $c^{\langle t \rangle}$ from one time step to the next. An LSTM has both a hidden state and cell state* In the lecture videos, we were using only a basic RNN for the post-attention sequence model * This means that the state captured by the RNN was outputting only the hidden state $s^{\langle t\rangle}$. * In this assignment, we are using an LSTM instead of a basic RNN. * So the LSTM has both the hidden state $s^{\langle t\rangle}$ and the cell state $c^{\langle t\rangle}$. Each time step does not use predictions from the previous time step* Unlike previous text generation examples earlier in the course, in this model, the post-attention LSTM at time $t$ does not take the previous time step's prediction $y^{\langle t-1 \rangle}$ as input.* The post-attention LSTM at time 't' only takes the hidden state $s^{\langle t\rangle}$ and cell state $c^{\langle t\rangle}$ as input. * We have designed the model this way because unlike language generation (where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date. 
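###Markdown
Before translating the diagram into Keras layers, it can help to compute the central quantity once by hand: the context vector of equation (1) further below is simply a weighted average of the pre-attention hidden states, with softmax weights. The following is an editorial numpy sketch (not part of the graded code); the shapes and random values are placeholders that follow the assignment's conventions.

```python
import numpy as np

Tx, n_a = 30, 32
a = np.random.rand(Tx, 2 * n_a)              # a^<t'>: Bi-LSTM hidden states, one row per input step

e = np.random.rand(Tx)                        # toy "energies" for a single output step t
alphas = np.exp(e) / np.exp(e).sum()          # softmax over the Tx input positions
context = (alphas[:, None] * a).sum(axis=0)   # weighted sum of the a^<t'>

print(alphas.sum())    # 1.0 -- attention weights sum to one
print(context.shape)   # (64,) -- one context vector for output step t
```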
Concatenation of hidden states from the forward and backward pre-attention LSTMs- $\overrightarrow{a}^{\langle t \rangle}$: hidden state of the forward-direction, pre-attention LSTM.- $\overleftarrow{a}^{\langle t \rangle}$: hidden state of the backward-direction, pre-attention LSTM.- $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}, \overleftarrow{a}^{\langle t \rangle}]$: the concatenation of the activations of both the forward-direction $\overrightarrow{a}^{\langle t \rangle}$ and backward-directions $\overleftarrow{a}^{\langle t \rangle}$ of the pre-attention Bi-LSTM. Computing "energies" $e^{\langle t, t' \rangle}$ as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$- Recall in the lesson videos "Attention Model", at time 6:45 to 8:16, the definition of "e" as a function of $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$. - "e" is called the "energies" variable. - $s^{\langle t-1 \rangle}$ is the hidden state of the post-attention LSTM - $a^{\langle t' \rangle}$ is the hidden state of the pre-attention LSTM. - $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ are fed into a simple neural network, which learns the function to output $e^{\langle t, t' \rangle}$. - $e^{\langle t, t' \rangle}$ is then used when computing the attention $a^{\langle t, t' \rangle}$ that $y^{\langle t \rangle}$ should pay to $a^{\langle t' \rangle}$. - The diagram on the right of figure 1 uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times.- Then it uses `Concatenation` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$.- The concatenation of $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ is fed into a "Dense" layer, which computes $e^{\langle t, t' \rangle}$. - $e^{\langle t, t' \rangle}$ is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$.- Note that the diagram doesn't explicitly show variable $e^{\langle t, t' \rangle}$, but $e^{\langle t, t' \rangle}$ is above the Dense layer and below the Softmax layer in the diagram in the right half of figure 1.- We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. Implementation Details Let's implement this neural translator. You will start by implementing two functions: `one_step_attention()` and `model()`. one_step_attention* The inputs to the one_step_attention at time step $t$ are: - $[a^{},a^{}, ..., a^{}]$: all hidden states of the pre-attention Bi-LSTM. - $s^{}$: the previous hidden state of the post-attention LSTM * one_step_attention computes: - $[\alpha^{},\alpha^{}, ..., \alpha^{}]$: the attention weights - $context^{ \langle t \rangle }$: the context vector: $$context^{} = \sum_{t' = 1}^{T_x} \alpha^{}a^{}\tag{1}$$ Clarifying 'context' and 'c'- In the lecture videos, the context was denoted $c^{\langle t \rangle}$- In the assignment, we are calling the context $context^{\langle t \rangle}$. - This is to avoid confusion with the post-attention LSTM's internal memory cell variable, which is also denoted $c^{\langle t \rangle}$. Exercise 1 - one_step_attention Implement `one_step_attention()`. * The function `model()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop.* It is important that all $T_y$ copies have the same weights. * It should not reinitialize the weights every time. * In other words, all $T_y$ steps should have shared weights. * Here's how you can implement layers with shareable weights in Keras: 1. 
Define the layer objects in a variable scope that is outside of the `one_step_attention` function. For example, defining the objects as global variables would work. - Note that defining these variables inside the scope of the function `model` would technically work, since `model` will then call the `one_step_attention` function. For the purposes of making grading and troubleshooting easier, we are defining these as global variables. Note that the automatic grader will expect these to be global variables as well. 2. Call these objects when propagating the input.* We have defined the layers you need as global variables. * Please run the following cells to create them. * Please note that the automatic grader expects these global variables with the given variable names. For grading purposes, please do not rename the global variables.* Please check the Keras documentation to learn more about these layers. The layers are functions. Below are examples of how to call these functions. * [RepeatVector()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RepeatVector)```Pythonvar_repeated = repeat_layer(var1)``` * [Concatenate()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Concatenate) ```Pythonconcatenated_vars = concatenate_layer([var1,var2,var3])``` * [Dense()](https://keras.io/layers/core/dense) ```Pythonvar_out = dense_layer(var_in)``` * [Activation()](https://keras.io/layers/core/activation) ```Pythonactivation = activation_layer(var_in) ``` * [Dot()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dot) ```Pythondot_product = dot_layer([var1,var2])``` ###Code # Defined shared layers as global variables repeator = RepeatVector(Tx) concatenator = Concatenate(axis=-1) densor1 = Dense(10, activation = "tanh") densor2 = Dense(1, activation = "relu") activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook dotor = Dot(axes = 1) # UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED FUNCTION: one_step_attention def one_step_attention(a, s_prev): """ Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights "alphas" and the hidden states "a" of the Bi-LSTM. Arguments: a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a) s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s) Returns: context -- context vector, input of the next (post-attention) LSTM cell """ ### START CODE HERE ### # Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line) s_prev = repeator(s_prev) # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line) # For grading purposes, please list 'a' first and 's_prev' second, in this order. concat = concatenator([a, s_prev]) # Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines) e = densor1(concat) # Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. 
(≈1 lines) energies = densor2(e) # Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line) alphas = activator(energies) # Use dotor together with "alphas" and "a", in this order, to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line) context = dotor([alphas, a]) ### END CODE HERE ### return context # UNIT TEST def one_step_attention_test(target): m = 10 Tx = 30 n_a = 32 n_s = 64 #np.random.seed(10) a = np.random.uniform(1, 0, (m, Tx, 2 * n_a)).astype(np.float32) s_prev =np.random.uniform(1, 0, (m, n_s)).astype(np.float32) * 1 context = target(a, s_prev) assert type(context) == tf.python.framework.ops.EagerTensor, "Unexpected type. It should be a Tensor" assert tuple(context.shape) == (m, 1, n_s), "Unexpected output shape" assert np.all(context.numpy() > 0), "All output values must be > 0 in this example" assert np.all(context.numpy() < 1), "All output values must be < 1 in this example" #assert np.allclose(context[0][0][0:5].numpy(), [0.50877404, 0.57160693, 0.45448175, 0.50074816, 0.53651875]), "Unexpected values in the result" print("\033[92mAll tests passed!") one_step_attention_test(one_step_attention) ###Output All tests passed! ###Markdown Exercise 2 - modelf Implement `modelf()` as explained in figure 1 and the instructions:* `modelf` first runs the input through a Bi-LSTM to get $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$. * Then, `modelf` calls `one_step_attention()` $T_y$ times using a `for` loop. At each iteration of this loop: - It gives the computed context vector $context^{\langle t \rangle}$ to the post-attention LSTM. - It runs the output of the post-attention LSTM through a dense layer with softmax activation. - The softmax generates a prediction $\hat{y}^{\langle t \rangle}$. Again, we have defined global layers that will share weights to be used in `modelf()`. ###Code n_a = 32 # number of units for the pre-attention, bi-directional LSTM's hidden state 'a' n_s = 64 # number of units for the post-attention LSTM's hidden state "s" # Please note, this is the post attention LSTM cell. post_activation_LSTM_cell = LSTM(n_s, return_state = True) # Please do not modify this global variable. output_layer = Dense(len(machine_vocab), activation=softmax) ###Output _____no_output_____ ###Markdown Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps: 1. Propagate the input `X` into a bi-directional LSTM. * [Bidirectional](https://keras.io/layers/wrappers/bidirectional) * [LSTM](https://keras.io/layers/recurrent/lstm) * Remember that we want the LSTM to return a full sequence instead of just the last hidden state. Sample code:```Pythonsequence_of_hidden_states = Bidirectional(LSTM(units=..., return_sequences=...))(the_input_X)``` 2. Iterate for $t = 0, \cdots, T_y-1$: 1. Call `one_step_attention()`, passing in the sequence of hidden states $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{ \langle T_x \rangle}]$ from the pre-attention bi-directional LSTM, and the previous hidden state $s^{\langle t-1 \rangle}$ from the post-attention LSTM to calculate the context vector $context^{\langle t \rangle}$. 2. Give $context^{\langle t \rangle}$ to the post-attention LSTM cell. - Remember to pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-states $c^{\langle t-1\rangle}$ of this LSTM * This outputs the new hidden state $s^{\langle t \rangle}$ and the new cell state $c^{\langle t \rangle}$. 
Sample code: ```Python next_hidden_state, _ , next_cell_state = post_activation_LSTM_cell(inputs=..., initial_state=[prev_hidden_state, prev_cell_state]) ``` Please note that the layer is actually the "post attention LSTM cell". For the purposes of passing the automatic grader, please do not modify the naming of this global variable. This will be fixed when we deploy updates to the automatic grader. 3. Apply a dense, softmax layer to $s^{}$, get the output. Sample code: ```Python output = output_layer(inputs=...) ``` 4. Save the output by adding it to the list of outputs.3. Create your Keras model instance. * It should have three inputs: * `X`, the one-hot encoded inputs to the model, of shape ($T_{x}, humanVocabSize)$ * $s^{\langle 0 \rangle}$, the initial hidden state of the post-attention LSTM * $c^{\langle 0 \rangle}$, the initial cell state of the post-attention LSTM * The output is the list of outputs. Sample code ```Python model = Model(inputs=[...,...,...], outputs=...) ``` ###Code # UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT) # GRADED FUNCTION: model def modelf(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size): """ Arguments: Tx -- length of the input sequence Ty -- length of the output sequence n_a -- hidden state size of the Bi-LSTM n_s -- hidden state size of the post-attention LSTM human_vocab_size -- size of the python dictionary "human_vocab" machine_vocab_size -- size of the python dictionary "machine_vocab" Returns: model -- Keras model instance """ # Define the inputs of your model with a shape (Tx,) # Define s0 (initial hidden state) and c0 (initial cell state) # for the decoder LSTM with shape (n_s,) X = Input(shape=(Tx, human_vocab_size)) s0 = Input(shape=(n_s,), name='s0') c0 = Input(shape=(n_s,), name='c0') s = s0 c = c0 # Initialize empty list of outputs outputs = [] ### START CODE HERE ### # Step 1: Define your pre-attention Bi-LSTM. (≈ 1 line) a = Bidirectional(LSTM(units=n_a, return_sequences=True))(X) # Step 2: Iterate for Ty steps for t in range(Ty): # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line) context = one_step_attention(a, s) # Step 2.B: Apply the post-attention LSTM cell to the "context" vector. # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line) s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c]) # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line) out = output_layer(s) # Step 2.D: Append "out" to the "outputs" list (≈ 1 line) outputs.append(out) # Step 3: Create model instance taking three inputs and returning the list of outputs. 
(≈ 1 line) model = Model(inputs=[X, s0, c0], outputs=outputs) ### END CODE HERE ### return model # UNIT TEST from test_utils import * def modelf_test(target): m = 10 Tx = 30 n_a = 32 n_s = 64 len_human_vocab = 37 len_machine_vocab = 11 model = target(Tx, Ty, n_a, n_s, len_human_vocab, len_machine_vocab) print(summary(model)) expected_summary = [['InputLayer', [(None, 30, 37)], 0], ['InputLayer', [(None, 64)], 0], ['Bidirectional', (None, 30, 64), 17920], ['RepeatVector', (None, 30, 64), 0, 30], ['Concatenate', (None, 30, 128), 0], ['Dense', (None, 30, 10), 1290, 'tanh'], ['Dense', (None, 30, 1), 11, 'relu'], ['Activation', (None, 30, 1), 0], ['Dot', (None, 1, 64), 0], ['InputLayer', [(None, 64)], 0], ['LSTM',[(None, 64), (None, 64), (None, 64)], 33024,[(None, 1, 64), (None, 64), (None, 64)],'tanh'], ['Dense', (None, 11), 715, 'softmax']] assert len(model.outputs) == 10, f"Wrong output shape. Expected 10 != {len(model.outputs)}" comparator(summary(model), expected_summary) modelf_test(modelf) ###Output [['InputLayer', [(None, 30, 37)], 0], ['InputLayer', [(None, 64)], 0], ['Bidirectional', (None, 30, 64), 17920], ['RepeatVector', (None, 30, 64), 0, 30], ['Concatenate', (None, 30, 128), 0], ['Dense', (None, 30, 10), 1290, 'tanh'], ['Dense', (None, 30, 1), 11, 'relu'], ['Activation', (None, 30, 1), 0], ['Dot', (None, 1, 64), 0], ['InputLayer', [(None, 64)], 0], ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh'], ['Dense', (None, 11), 715, 'softmax']] All tests passed! ###Markdown Run the following cell to create your model. ###Code model = modelf(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab)) ###Output _____no_output_____ ###Markdown Troubleshooting Note* If you are getting repeated errors after an initially incorrect implementation of "model", but believe that you have corrected the error, you may still see error messages when building your model. * A solution is to save and restart your kernel (or shutdown then restart your notebook), and re-run the cells. Let's get a summary of the model to check if it matches the expected output. 
###Code model.summary() ###Output Model: "functional_15" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_8 (InputLayer) [(None, 30, 37)] 0 __________________________________________________________________________________________________ s0 (InputLayer) [(None, 64)] 0 __________________________________________________________________________________________________ bidirectional_7 (Bidirectional) (None, 30, 64) 17920 input_8[0][0] __________________________________________________________________________________________________ repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] lstm_5[30][0] lstm_5[31][0] lstm_5[32][0] lstm_5[33][0] lstm_5[34][0] lstm_5[35][0] lstm_5[36][0] lstm_5[37][0] lstm_5[38][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_7[0][0] repeat_vector_1[30][0] bidirectional_7[0][0] repeat_vector_1[31][0] bidirectional_7[0][0] repeat_vector_1[32][0] bidirectional_7[0][0] repeat_vector_1[33][0] bidirectional_7[0][0] repeat_vector_1[34][0] bidirectional_7[0][0] repeat_vector_1[35][0] bidirectional_7[0][0] repeat_vector_1[36][0] bidirectional_7[0][0] repeat_vector_1[37][0] bidirectional_7[0][0] repeat_vector_1[38][0] bidirectional_7[0][0] repeat_vector_1[39][0] __________________________________________________________________________________________________ dense_3 (Dense) (None, 30, 10) 1290 concatenate_1[30][0] concatenate_1[31][0] concatenate_1[32][0] concatenate_1[33][0] concatenate_1[34][0] concatenate_1[35][0] concatenate_1[36][0] concatenate_1[37][0] concatenate_1[38][0] concatenate_1[39][0] __________________________________________________________________________________________________ dense_4 (Dense) (None, 30, 1) 11 dense_3[30][0] dense_3[31][0] dense_3[32][0] dense_3[33][0] dense_3[34][0] dense_3[35][0] dense_3[36][0] dense_3[37][0] dense_3[38][0] dense_3[39][0] __________________________________________________________________________________________________ attention_weights (Activation) (None, 30, 1) 0 dense_4[30][0] dense_4[31][0] dense_4[32][0] dense_4[33][0] dense_4[34][0] dense_4[35][0] dense_4[36][0] dense_4[37][0] dense_4[38][0] dense_4[39][0] __________________________________________________________________________________________________ dot_1 (Dot) (None, 1, 64) 0 attention_weights[30][0] bidirectional_7[0][0] attention_weights[31][0] bidirectional_7[0][0] attention_weights[32][0] bidirectional_7[0][0] attention_weights[33][0] bidirectional_7[0][0] attention_weights[34][0] bidirectional_7[0][0] attention_weights[35][0] bidirectional_7[0][0] attention_weights[36][0] bidirectional_7[0][0] attention_weights[37][0] bidirectional_7[0][0] attention_weights[38][0] bidirectional_7[0][0] attention_weights[39][0] bidirectional_7[0][0] __________________________________________________________________________________________________ c0 (InputLayer) [(None, 64)] 0 __________________________________________________________________________________________________ lstm_5 (LSTM) [(None, 64), (None, 33024 dot_1[30][0] s0[0][0] c0[0][0] dot_1[31][0] lstm_5[30][0] lstm_5[30][2] dot_1[32][0] lstm_5[31][0] lstm_5[31][2] dot_1[33][0] lstm_5[32][0] lstm_5[32][2] dot_1[34][0] lstm_5[33][0] lstm_5[33][2] dot_1[35][0] lstm_5[34][0] lstm_5[34][2] dot_1[36][0] 
lstm_5[35][0] lstm_5[35][2] dot_1[37][0] lstm_5[36][0] lstm_5[36][2] dot_1[38][0] lstm_5[37][0] lstm_5[37][2] dot_1[39][0] lstm_5[38][0] lstm_5[38][2] __________________________________________________________________________________________________ dense_5 (Dense) (None, 11) 715 lstm_5[30][0] lstm_5[31][0] lstm_5[32][0] lstm_5[33][0] lstm_5[34][0] lstm_5[35][0] lstm_5[36][0] lstm_5[37][0] lstm_5[38][0] lstm_5[39][0] ================================================================================================== Total params: 52,960 Trainable params: 52,960 Non-trainable params: 0 __________________________________________________________________________________________________ ###Markdown **Expected Output**:Here is the summary you should see **Total params:** 52,960 **Trainable params:** 52,960 **Non-trainable params:** 0 **bidirectional_1's output shape ** (None, 30, 64) **repeat_vector_1's output shape ** (None, 30, 64) **concatenate_1's output shape ** (None, 30, 128) **attention_weights's output shape ** (None, 30, 1) **dot_1's output shape ** (None, 1, 64) **dense_3's output shape ** (None, 11) Exercise 3 - Compile the Model* After creating your model in Keras, you need to compile it and define the loss function, optimizer and metrics you want to use. * Loss function: 'categorical_crossentropy'. * Optimizer: [Adam](https://keras.io/optimizers/adam) [optimizer](https://keras.io/optimizers/usage-of-optimizers) - learning rate = 0.005 - $\beta_1 = 0.9$ - $\beta_2 = 0.999$ - decay = 0.01 * metric: 'accuracy' Sample code```Pythonoptimizer = Adam(lr=..., beta_1=..., beta_2=..., decay=...)model.compile(optimizer=..., loss=..., metrics=[...])``` ###Code ### START CODE HERE ### (≈2 lines) opt = Adam(learning_rate=0.005, beta_1=0.9, beta_2=0.999, decay=0.01) # Adam(...) model.compile(loss = 'categorical_crossentropy', optimizer = opt, metrics = ['accuracy']) ### END CODE HERE ### # UNIT TESTS assert opt.lr == 0.005, "Set the lr parameter to 0.005" assert opt.beta_1 == 0.9, "Set the beta_1 parameter to 0.9" assert opt.beta_2 == 0.999, "Set the beta_2 parameter to 0.999" assert opt.decay == 0.01, "Set the decay parameter to 0.01" assert model.loss == "categorical_crossentropy", "Wrong loss. Use 'categorical_crossentropy'" assert model.optimizer == opt, "Use the optimizer that you have instantiated" assert model.compiled_metrics._user_metrics[0] == 'accuracy', "set metrics to ['accuracy']" print("\033[92mAll tests passed!") ###Output All tests passed! ###Markdown Define inputs and outputs, and fit the modelThe last step is to define all your inputs and outputs to fit the model:- You have input `Xoh` of shape $(m = 10000, T_x = 30, human\_vocab=37)$ containing the training examples.- You need to create `s0` and `c0` to initialize your `post_attention_LSTM_cell` with zeros.- Given the `model()` you coded, you need the "outputs" to be a list of 10 elements of shape (m, T_y). - The list `outputs[i][0], ..., outputs[i][Ty]` represents the true labels (characters) corresponding to the $i^{th}$ training example (`Xoh[i]`). - `outputs[i][j]` is the true label of the $j^{th}$ character in the $i^{th}$ training example. ###Code s0 = np.zeros((m, n_s)) c0 = np.zeros((m, n_s)) outputs = list(Yoh.swapaxes(0,1)) ###Output _____no_output_____ ###Markdown Let's now fit the model and run it for one epoch. 
###Code model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100) ###Output 100/100 [==============================] - 11s 105ms/step - loss: 16.2926 - dense_5_loss: 1.1597 - dense_5_1_loss: 0.9994 - dense_5_2_loss: 1.7896 - dense_5_3_loss: 2.6431 - dense_5_4_loss: 0.7504 - dense_5_5_loss: 1.2167 - dense_5_6_loss: 2.6471 - dense_5_7_loss: 0.8713 - dense_5_8_loss: 1.6680 - dense_5_9_loss: 2.5472 - dense_5_accuracy: 0.5536 - dense_5_1_accuracy: 0.6846 - dense_5_2_accuracy: 0.2987 - dense_5_3_accuracy: 0.0853 - dense_5_4_accuracy: 0.9458 - dense_5_5_accuracy: 0.3621 - dense_5_6_accuracy: 0.0749 - dense_5_7_accuracy: 0.9661 - dense_5_8_accuracy: 0.2608 - dense_5_9_accuracy: 0.1060 ###Markdown While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples: Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.) ###Code model.load_weights('models/model.h5') ###Output _____no_output_____ ###Markdown You can now see the results on new examples. ###Code EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001'] s00 = np.zeros((1, n_s)) c00 = np.zeros((1, n_s)) for example in EXAMPLES: source = string_to_int(example, Tx, human_vocab) #print(source) source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1) source = np.swapaxes(source, 0, 1) source = np.expand_dims(source, axis=0) prediction = model.predict([source, s00, c00]) prediction = np.argmax(prediction, axis = -1) output = [inv_machine_vocab[int(i)] for i in prediction] print("source:", example) print("output:", ''.join(output),"\n") ###Output source: 3 May 1979 output: 1979-05-33 source: 5 April 09 output: 2009-04-05 source: 21th of August 2016 output: 2016-08-20 source: Tue 10 Jul 2007 output: 2007-07-10 source: Saturday May 9 2018 output: 2018-05-09 source: March 3 2001 output: 2001-03-03 source: March 3rd 2001 output: 2001-03-03 source: 1 March 2001 output: 2001-03-01 ###Markdown You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. 3 - Visualizing Attention (Optional / Ungraded)Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (such as the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what each part of the output is looking at which part of the input.Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this: **Figure 8**: Full Attention MapNotice how the output ignores the "Saturday" portion of the input. 
None of the output timesteps are paying much attention to that portion of the input. We also see that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018." 3.1 - Getting the Attention Weights From the NetworkLets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$. To figure out where the attention values are located, let's start by printing a summary of the model . ###Code model.summary() ###Output Model: "functional_15" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_8 (InputLayer) [(None, 30, 37)] 0 __________________________________________________________________________________________________ s0 (InputLayer) [(None, 64)] 0 __________________________________________________________________________________________________ bidirectional_7 (Bidirectional) (None, 30, 64) 17920 input_8[0][0] __________________________________________________________________________________________________ repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0] lstm_5[30][0] lstm_5[31][0] lstm_5[32][0] lstm_5[33][0] lstm_5[34][0] lstm_5[35][0] lstm_5[36][0] lstm_5[37][0] lstm_5[38][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_7[0][0] repeat_vector_1[30][0] bidirectional_7[0][0] repeat_vector_1[31][0] bidirectional_7[0][0] repeat_vector_1[32][0] bidirectional_7[0][0] repeat_vector_1[33][0] bidirectional_7[0][0] repeat_vector_1[34][0] bidirectional_7[0][0] repeat_vector_1[35][0] bidirectional_7[0][0] repeat_vector_1[36][0] bidirectional_7[0][0] repeat_vector_1[37][0] bidirectional_7[0][0] repeat_vector_1[38][0] bidirectional_7[0][0] repeat_vector_1[39][0] __________________________________________________________________________________________________ dense_3 (Dense) (None, 30, 10) 1290 concatenate_1[30][0] concatenate_1[31][0] concatenate_1[32][0] concatenate_1[33][0] concatenate_1[34][0] concatenate_1[35][0] concatenate_1[36][0] concatenate_1[37][0] concatenate_1[38][0] concatenate_1[39][0] __________________________________________________________________________________________________ dense_4 (Dense) (None, 30, 1) 11 dense_3[30][0] dense_3[31][0] dense_3[32][0] dense_3[33][0] dense_3[34][0] dense_3[35][0] dense_3[36][0] dense_3[37][0] dense_3[38][0] dense_3[39][0] __________________________________________________________________________________________________ attention_weights (Activation) (None, 30, 1) 0 dense_4[30][0] dense_4[31][0] dense_4[32][0] dense_4[33][0] dense_4[34][0] dense_4[35][0] dense_4[36][0] dense_4[37][0] dense_4[38][0] dense_4[39][0] __________________________________________________________________________________________________ dot_1 (Dot) (None, 1, 64) 0 attention_weights[30][0] bidirectional_7[0][0] attention_weights[31][0] bidirectional_7[0][0] attention_weights[32][0] bidirectional_7[0][0] attention_weights[33][0] bidirectional_7[0][0] attention_weights[34][0] bidirectional_7[0][0] attention_weights[35][0] bidirectional_7[0][0] 
attention_weights[36][0] bidirectional_7[0][0] attention_weights[37][0] bidirectional_7[0][0] attention_weights[38][0] bidirectional_7[0][0] attention_weights[39][0] bidirectional_7[0][0] __________________________________________________________________________________________________ c0 (InputLayer) [(None, 64)] 0 __________________________________________________________________________________________________ lstm_5 (LSTM) [(None, 64), (None, 33024 dot_1[30][0] s0[0][0] c0[0][0] dot_1[31][0] lstm_5[30][0] lstm_5[30][2] dot_1[32][0] lstm_5[31][0] lstm_5[31][2] dot_1[33][0] lstm_5[32][0] lstm_5[32][2] dot_1[34][0] lstm_5[33][0] lstm_5[33][2] dot_1[35][0] lstm_5[34][0] lstm_5[34][2] dot_1[36][0] lstm_5[35][0] lstm_5[35][2] dot_1[37][0] lstm_5[36][0] lstm_5[36][2] dot_1[38][0] lstm_5[37][0] lstm_5[37][2] dot_1[39][0] lstm_5[38][0] lstm_5[38][2] __________________________________________________________________________________________________ dense_5 (Dense) (None, 11) 715 lstm_5[30][0] lstm_5[31][0] lstm_5[32][0] lstm_5[33][0] lstm_5[34][0] lstm_5[35][0] lstm_5[36][0] lstm_5[37][0] lstm_5[38][0] lstm_5[39][0] ================================================================================================== Total params: 52,960 Trainable params: 52,960 Non-trainable params: 0 __________________________________________________________________________________________________ ###Markdown Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the attention weights from this layer.The function `attention_map()` pulls out the attention values from your model and plots them.**Note**: We are aware that you might run into an error running the cell below despite a valid implementation for Exercise 2 - `modelf` above. If you get the error kindly report it on this [Topic](https://discourse.deeplearning.ai/t/error-in-optional-ungraded-part-of-neural-machine-translation-w3a1/1096) on [Discourse](https://discourse.deeplearning.ai) as it'll help us improve our content. If you haven’t joined our Discourse community you can do so by clicking on the link: http://bit.ly/dls-discourseAnd don’t worry about the error, it will not affect the grading for this assignment. ###Code attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64); ###Output _____no_output_____
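###Markdown If you want to try more of your own dates, the prediction loop from the examples cell above can be wrapped in a small helper. This is only a convenience sketch, not part of the graded assignment: the name `translate_date` is ours, and it assumes the trained `model`, `human_vocab`, `inv_machine_vocab`, `Tx` and `n_s` defined earlier (the two `swapaxes` calls in the examples cell cancel each other out, so they are omitted here). ###Code
def translate_date(example):
    # Encode the input string as a (1, Tx, len(human_vocab)) one-hot tensor
    source = string_to_int(example, Tx, human_vocab)
    source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source)))
    source = np.expand_dims(source, axis=0)
    # The post-attention LSTM starts from zero hidden and cell states
    s00 = np.zeros((1, n_s))
    c00 = np.zeros((1, n_s))
    # model.predict returns a list of Ty arrays, each of shape (1, len(machine_vocab))
    prediction = model.predict([source, s00, c00])
    prediction = np.argmax(prediction, axis=-1)
    return ''.join(inv_machine_vocab[int(i)] for i in prediction)

translate_date('Saturday May 9 2018') ###Output _____no_output_____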
07 Teaching Machines/donow/Kromreig_Georgia_7_donow.ipynb
###Markdown Apply logistic regression to categorize whether a county had a high mortality rate due to contamination 1. Import the necessary packages to read in the data, plot, and create a logistic regression model ###Code import pandas as pd %matplotlib inline import numpy as np from sklearn.linear_model import LogisticRegression import statsmodels.formula.api as smf ###Output _____no_output_____ ###Markdown 2. Read in the hanford.csv file in the `data/` folder ###Code df = pd.read_csv("hanford.csv") ###Output _____no_output_____ ###Markdown 3. Calculate the basic descriptive statistics on the data ###Code df.describe() df.corr() ###Output _____no_output_____ ###Markdown 4. Find a reasonable threshold to say exposure is high and recode the data ###Code # I could define "high exposure" as 1.5 x IQR, which would be: Q3-Q1, or 6.41-2.49 high_exposure = 4.08*1.5 df['Exposure'].describe() ###Output _____no_output_____ ###Markdown 5. Create a logistic regression model ###Code lm = smf.ols(formula="Mortality~Exposure",data=df).fit() #notice the formula regresses Y on X (Y~X) intercept, slope = lm.params lm.params ###Output _____no_output_____ ###Markdown 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50 ###Code # y = m*x + b: predict the mortality rate at an exposure level of 50, using the intercept and slope fitted above
intercept + slope * 50 ###Output _____no_output_____
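###Markdown The title of this do-now asks for a logistic regression that categorizes counties, and `LogisticRegression` is imported in step 1 but never used above. Below is a minimal sketch of one way that could look. It is an illustration under stated assumptions, not the assignment's answer: the cutoff for calling a county's mortality rate "high" (the median of `Mortality`) and the column name `high_mortality` are ours. ###Code
# Assumed recoding: counties with above-median mortality are labeled "high" (1), the rest 0
df['high_mortality'] = (df['Mortality'] > df['Mortality'].median()).astype(int)

# Logistic regression of the high/low label on Exposure
logit = LogisticRegression()
logit.fit(df[['Exposure']], df['high_mortality'])

# Estimated probability that a county with an exposure level of 50 falls in the "high mortality" group
logit.predict_proba([[50]]) ###Output _____no_output_____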
code/day_1/1 - Easy as Pi.ipynb
###Markdown Imports ###Code from os import environ, path environ["SPARK_HOME"] = "/home/students/spark-2.2.0" import findspark findspark.init() import pyspark import random ###Output _____no_output_____ ###Markdown Get Some Context ###Code # Create a Spark context to use sc = pyspark.SparkContext(appName="EasyAsPi") ###Output _____no_output_____ ###Markdown Calculate Pi ###Code # Run the pi example num_samples = 100000000 def inside(p): x, y = random.random(), random.random() return x*x + y*y < 1 count = sc.parallelize(range(0, num_samples)).filter(inside).count() pi = 4 * count / num_samples print(pi) ###Output _____no_output_____ ###Markdown Shut it Down ###Code # Close the Spark context sc.stop() ###Output _____no_output_____
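###Markdown Why the estimate in the "Calculate Pi" cell works: each sampled point $(x, y)$ is uniform on the unit square, and the test $x^2 + y^2 < 1$ keeps the points that land inside the quarter circle of radius 1, whose area is $\pi/4$. The fraction of points kept therefore estimates $\pi/4$, which gives $$\pi \approx 4 \cdot \frac{\text{count}}{\text{num\_samples}}.$$ With $10^8$ samples the standard error of the estimate is about $4\sqrt{(\pi/4)(1 - \pi/4)/10^8} \approx 1.6 \times 10^{-4}$, so the printed value should typically agree with $\pi$ to roughly three decimal places.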
Neural Networks and Deep Learning/Week 2/Logistic Regression as a Neural Network/Logistic_Regression_with_a_Neural_Network_mindset_v6a.ipynb
###Markdown Logistic Regression with a Neural Network mindsetWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.**Instructions:**- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.**You will learn to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. UpdatesThis notebook has been updated over the past few months. The prior version was named "v5", and the current versionis now named '6a' If you were working on a previous version:* You can find your prior work by looking in the file directory for the older files (named by version name).* To view the file directory, click on the "Coursera" icon in the top left corner of this notebook.* Please copy your work from the older versions to the new version, in order to submit your work for grading. List of Updates* Forward propagation formula, indexing now starts at 1 instead of 0.* Optimization function comment now says "print cost every 100 training iterations" instead of "examples".* Fixed grammar in the comments.* Y_prediction_test variable name is used consistently.* Plot's axis label now says "iterations (hundred)" instead of "iterations".* When testing the model, the test image is normalized by dividing by 255. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end. ###Code import numpy as np import matplotlib.pyplot as plt import h5py import scipy from PIL import Image from scipy import ndimage from lr_utils import load_dataset %matplotlib inline ###Output _____no_output_____ ###Markdown 2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.Let's get more familiar with the dataset. Load the data by running the following code. ###Code # Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset() ###Output _____no_output_____ ###Markdown We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. 
You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images. ###Code # Example of a picture index = 25 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.") ###Output y = [1], it's a 'cat' picture. ###Markdown Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **Exercise:** Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image)Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`. ###Code ### START CODE HERE ### (≈ 3 lines of code) m_train = len(train_set_x_orig) m_test = len(test_set_x_orig) num_px = len(train_set_x_orig[0]) ### END CODE HERE ### print ("Number of training examples: m_train = " + str(m_train)) print ("Number of testing examples: m_test = " + str(m_test)) print ("Height/Width of each image: num_px = " + str(num_px)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_set_x shape: " + str(train_set_x_orig.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x shape: " + str(test_set_x_orig.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) ###Output Number of training examples: m_train = 209 Number of testing examples: m_test = 50 Height/Width of each image: num_px = 64 Each image is of size: (64, 64, 3) train_set_x shape: (209, 64, 64, 3) train_set_y shape: (1, 209) test_set_x shape: (50, 64, 64, 3) test_set_y shape: (1, 50) ###Markdown **Expected Output for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. 
There should be m_train (respectively m_test) columns.**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use: ```pythonX_flatten = X.reshape(X.shape[0], -1).T X.T is the transpose of X``` ###Code # Reshape the training and test examples ### START CODE HERE ### (≈ 2 lines of code) train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0])) ###Output train_set_x_flatten shape: (12288, 209) train_set_y shape: (1, 209) test_set_x_flatten shape: (12288, 50) test_set_y shape: (1, 50) sanity check after reshaping: [17 31 56 22 33] ###Markdown **Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset. ###Code train_set_x = train_set_x_flatten/255. test_set_x = test_set_x_flatten/255. ###Output _____no_output_____ ###Markdown **What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. 
Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp(). ###Code # GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (≈ 1 line of code) s = 1 / (1 + np.exp(-z)) ### END CODE HERE ### return s print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2])))) ###Output sigmoid([0, 2]) = [ 0.5 0.88079708] ###Markdown **Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation. ###Code # GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ ### START CODE HERE ### (≈ 1 line of code) w = np.zeros((dim, 1)) b = 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b)) ###Output w = [[ 0.] [ 0.]] b = 0 ###Markdown **Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.**Hints**:Forward Propagation:- You get X- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$ ###Code # GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. 
np.log(), np.dot() """ m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid(np.dot(w.T, X) + b) # compute activation cost = (-1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) # compute cost ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = (1 / m) * np.dot(X, (A - Y).T) db = (1 / m) * np.sum(A - Y) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost)) ###Output dw = [[ 0.99845601] [ 2.39507239]] db = 0.00145557813678 cost = 5.80154531939 ###Markdown **Expected Output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate. ###Code # GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b. 
""" costs = [] for i in range(num_iterations): # Cost and gradient calculation (≈ 1-4 lines of code) ### START CODE HERE ### grads, cost = propagate(w, b, X, Y) ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (≈ 2 lines of code) ### START CODE HERE ### w = w - learning_rate * dw b = b - learning_rate * db ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training iterations if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) ###Output w = [[ 0.19033591] [ 0.12259159]] b = 1.92535983008 dw = [[ 0.67752042] [ 1.41625495]] db = 0.219194504541 ###Markdown **Expected Output**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries of a into 0 (if activation 0.5), stores the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this). ###Code # GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1,m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (≈ 1 line of code) A = sigmoid(np.dot(w.T, X) + b) ### END CODE HERE ### for i in range(A.shape[1]): # Convert probabilities A[0,i] to actual predictions p[0,i] ### START CODE HERE ### (≈ 4 lines of code) Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0 ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction w = np.array([[0.1124579],[0.23106775]]) b = -0.3 X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]]) print ("predictions = " + str(predict(w, b, X))) ###Output predictions = [[ 1. 1. 0.]] ###Markdown **Expected Output**: **predictions** [[ 1. 1. 0.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. 
Use the following notation: - Y_prediction_test for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize() ###Code # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. """ ### START CODE HERE ### # initialize parameters with zeros (≈ 1 line of code) w, b = initialize_with_zeros(X_train.shape[0]) # Gradient descent (≈ 1 line of code) parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost) # Retrieve parameters w and b from dictionary "parameters" w = parameters["w"] b = parameters["b"] # Predict test/train set examples (≈ 2 lines of code) Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d ###Output _____no_output_____ ###Markdown Run the following cell to train your model. ###Code d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True) ###Output Cost after iteration 0: 0.693147 Cost after iteration 100: 0.584508 Cost after iteration 200: 0.466949 Cost after iteration 300: 0.376007 Cost after iteration 400: 0.331463 Cost after iteration 500: 0.303273 Cost after iteration 600: 0.279880 Cost after iteration 700: 0.260042 Cost after iteration 800: 0.242941 Cost after iteration 900: 0.228004 Cost after iteration 1000: 0.214820 Cost after iteration 1100: 0.203078 Cost after iteration 1200: 0.192544 Cost after iteration 1300: 0.183033 Cost after iteration 1400: 0.174399 Cost after iteration 1500: 0.166521 Cost after iteration 1600: 0.159305 Cost after iteration 1700: 0.152667 Cost after iteration 1800: 0.146542 Cost after iteration 1900: 0.140872 train accuracy: 99.04306220095694 % test accuracy: 70.0 % ###Markdown **Expected Output**: **Cost after iteration 0 ** 0.693147 $\vdots$ $\vdots$ **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 68%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. 
But no worries, you'll build an even better classifier next week!Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set. ###Code # Example of a picture that was wrongly classified. index = 1 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.") ###Output y = 1, you predicted that it is a "cat" picture. ###Markdown Let's also plot the cost function and the gradients. ###Code # Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show() ###Output _____no_output_____ ###Markdown **Interpretation**:You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**:In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens. ###Code learning_rates = [0.01, 0.001, 0.0001] models = {} for i in learning_rates: print ("learning rate is: " + str(i)) models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False) print ('\n' + "-------------------------------------------------------" + '\n') for i in learning_rates: plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"])) plt.ylabel('cost') plt.xlabel('iterations (hundreds)') legend = plt.legend(loc='upper center', shadow=True) frame = legend.get_frame() frame.set_facecolor('0.90') plt.show() ###Output learning rate is: 0.01 train accuracy: 99.52153110047847 % test accuracy: 68.0 % ------------------------------------------------------- learning rate is: 0.001 train accuracy: 88.99521531100478 % test accuracy: 64.0 % ------------------------------------------------------- learning rate is: 0.0001 train accuracy: 68.42105263157895 % test accuracy: 36.0 % ------------------------------------------------------- ###Markdown **Interpretation**: - Different learning rates give different costs and thus different predictions results.- If the learning rate is too large (0.01), the cost may oscillate up and down. 
It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.- In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)! ###Code ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "my_image.jpg" # change this to the name of your image file ## END CODE HERE ## # We preprocess the image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) image = image/255. my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T my_predicted_image = predict(d["w"], d["b"], my_image) plt.imshow(image) print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.") ###Output _____no_output_____ ###Markdown Logistic Regression with a Neural Network mindsetWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.**Instructions:**- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.**You will learn to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. UpdatesThis notebook has been updated over the past few months. The prior version was named "v5", and the current versionis now named '6a' If you were working on a previous version:* You can find your prior work by looking in the file directory for the older files (named by version name).* To view the file directory, click on the "Coursera" icon in the top left corner of this notebook.* Please copy your work from the older versions to the new version, in order to submit your work for grading. List of Updates* Forward propagation formula, indexing now starts at 1 instead of 0.* Optimization function comment now says "print cost every 100 training iterations" instead of "examples".* Fixed grammar in the comments.* Y_prediction_test variable name is used consistently.* Plot's axis label now says "iterations (hundred)" instead of "iterations".* When testing the model, the test image is normalized by dividing by 255. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. 
- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end. ###Code import numpy as np import matplotlib.pyplot as plt import h5py import scipy from PIL import Image from scipy import ndimage from lr_utils import load_dataset %matplotlib inline ###Output _____no_output_____ ###Markdown 2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.Let's get more familiar with the dataset. Load the data by running the following code. ###Code # Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset() ###Output _____no_output_____ ###Markdown We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images. ###Code # Example of a picture index = 25 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.") ###Output y = [1], it's a 'cat' picture. ###Markdown Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **Exercise:** Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image)Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`. 
###Code ### START CODE HERE ### (≈ 3 lines of code) m_train = train_set_x_orig.shape[0] m_test = test_set_x_orig.shape[0] num_px = test_set_x_orig[0].shape[0] ### END CODE HERE ### print ("Number of training examples: m_train = " + str(m_train)) print ("Number of testing examples: m_test = " + str(m_test)) print ("Height/Width of each image: num_px = " + str(num_px)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_set_x shape: " + str(train_set_x_orig.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x shape: " + str(test_set_x_orig.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) ###Output Number of training examples: m_train = 209 Number of testing examples: m_test = 50 Height/Width of each image: num_px = 64 Each image is of size: (64, 64, 3) train_set_x shape: (209, 64, 64, 3) train_set_y shape: (1, 209) test_set_x shape: (50, 64, 64, 3) test_set_y shape: (1, 50) ###Markdown **Expected Output for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use: ```pythonX_flatten = X.reshape(X.shape[0], -1).T X.T is the transpose of X``` ###Code # Reshape the training and test examples ### START CODE HERE ### (≈ 2 lines of code) train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0])) ###Output train_set_x_flatten shape: (12288, 209) train_set_y shape: (1, 209) test_set_x_flatten shape: (12288, 50) test_set_y shape: (1, 50) sanity check after reshaping: [17 31 56 22 33] ###Markdown **Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset. ###Code train_set_x = train_set_x_flatten/255. test_set_x = test_set_x_flatten/255. 
###Output _____no_output_____ ###Markdown **What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp(). ###Code # GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (≈ 1 line of code) s = 1/(1+np.exp(-z)) ### END CODE HERE ### return s print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2])))) ###Output sigmoid([0, 2]) = [ 0.5 0.88079708] ###Markdown **Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation. ###Code # GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ ### START CODE HERE ### (≈ 1 line of code) w = np.zeros((dim, 1)) b = 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b)) ###Output w = [[ 0.] [ 0.]] b = 0 ###Markdown **Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 
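For the 64 × 64 RGB images used in this dataset, that works out to 12,288 rows. A quick, hypothetical sanity check (not one of the graded cells) could look like this:

```python
import numpy as np

num_px = 64                      # height/width of the training images
dim = num_px * num_px * 3        # flattened feature dimension: 12288
w, b = np.zeros((dim, 1)), 0.0   # same shapes as initialize_with_zeros(dim) would return
print(w.shape, b)                # (12288, 1) 0.0
```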
4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.**Hints**:Forward Propagation:- You get X- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$ ###Code # GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. np.log(), np.dot() """ m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid(np.dot(w.T, X) + b) # compute activation cost = -1/m*(np.sum(Y*np.log(A) + (1-Y)*np.log(1-A))) # compute cost ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = 1/m*(np.dot(X, ((A-Y).T))) db = 1/m*(np.sum(A-Y)) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost)) ###Output dw = [[ 0.99845601] [ 2.39507239]] db = 0.00145557813678 cost = 5.80154531939 ###Markdown **Expected Output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate. 
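As a purely illustrative, one-step example of that update rule with made-up numbers (not part of the graded function below):

```python
import numpy as np

theta = np.array([[1.0], [2.0]])    # current parameters
dtheta = np.array([[0.5], [-0.2]])  # gradient of the cost w.r.t. theta
alpha = 0.1                         # learning rate
theta = theta - alpha * dtheta      # theta^(next step)
print(theta)                        # [[0.95] [2.02]]
```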
###Code # GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b. """ costs = [] for i in range(num_iterations): # Cost and gradient calculation (≈ 1-4 lines of code) ### START CODE HERE ### grads, cost = propagate(w, b, X, Y) ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (≈ 2 lines of code) ### START CODE HERE ### w = w - learning_rate*dw b = b - learning_rate*db ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training iterations if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) ###Output w = [[ 0.19033591] [ 0.12259159]] b = 1.92535983008 dw = [[ 0.67752042] [ 1.41625495]] db = 0.219194504541 ###Markdown **Expected Output**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries of a into 0 (if activation 0.5), stores the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this). 
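In other words, an entry becomes 1 when the activation is above 0.5 and 0 otherwise. For reference, the vectorized route hinted at above could be sketched (outside the graded cell) as:

```python
import numpy as np

A = np.array([[0.2, 0.9, 0.51, 0.3]])   # example activations
Y_prediction = (A > 0.5).astype(float)  # 1.0 where activation > 0.5, else 0.0
print(Y_prediction)                     # [[0. 1. 1. 0.]]
```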
###Code # GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1,m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (≈ 1 line of code) A = sigmoid(np.dot(w.T, X)+ b) ### END CODE HERE ### for i in range(A.shape[1]): # Convert probabilities A[0,i] to actual predictions p[0,i] ### START CODE HERE ### (≈ 4 lines of code) Y_prediction[0][i] = 1 if A[0][i]>0.5 else 0 ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction w = np.array([[0.1124579],[0.23106775]]) b = -0.3 X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]]) print ("predictions = " + str(predict(w, b, X))) ###Output predictions = [[ 1. 1. 0.]] ###Markdown **Expected Output**: **predictions** [[ 1. 1. 0.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. Use the following notation: - Y_prediction_test for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize() ###Code # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. 
""" ### START CODE HERE ### # initialize parameters with zeros (≈ 1 line of code) w, b = initialize_with_zeros(X_train.shape[0]) # Gradient descent (≈ 1 line of code) parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost = False) # Retrieve parameters w and b from dictionary "parameters" w = parameters["w"] b = parameters["b"] # Predict test/train set examples (≈ 2 lines of code) Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d ###Output _____no_output_____ ###Markdown Run the following cell to train your model. ###Code d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True) ###Output train accuracy: 99.04306220095694 % test accuracy: 70.0 % ###Markdown **Expected Output**: **Cost after iteration 0 ** 0.693147 $\vdots$ $\vdots$ **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 68%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set. ###Code # Example of a picture that was wrongly classified. index = 9 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.") ###Output y = 1, you predicted that it is a "cat" picture. ###Markdown Let's also plot the cost function and the gradients. ###Code # Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show() ###Output _____no_output_____ ###Markdown **Interpretation**:You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**:In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. 
If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens. ###Code learning_rates = [0.01, 0.001, 0.0001] models = {} for i in learning_rates: print ("learning rate is: " + str(i)) models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False) print ('\n' + "-------------------------------------------------------" + '\n') for i in learning_rates: plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"])) plt.ylabel('cost') plt.xlabel('iterations (hundreds)') legend = plt.legend(loc='upper center', shadow=True) frame = legend.get_frame() frame.set_facecolor('0.90') plt.show() ###Output learning rate is: 0.01 train accuracy: 99.52153110047847 % test accuracy: 68.0 % ------------------------------------------------------- learning rate is: 0.001 train accuracy: 88.99521531100478 % test accuracy: 64.0 % ------------------------------------------------------- learning rate is: 0.0001 train accuracy: 68.42105263157895 % test accuracy: 36.0 % ------------------------------------------------------- ###Markdown **Interpretation**: - Different learning rates give different costs and thus different predictions results.- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.- In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)! ###Code ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "my_image.jpg" # change this to the name of your image file ## END CODE HERE ## # We preprocess the image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) image = image/255. my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T my_predicted_image = predict(d["w"], d["b"], my_image) plt.imshow(image) print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.") ###Output _____no_output_____
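###Markdown Note on the image-loading cell above: `scipy.ndimage.imread` and `scipy.misc.imresize` were deprecated and later removed in newer SciPy releases, so that cell may fail on a recent environment. A roughly equivalent preprocessing step using Pillow (already imported above) might look like this sketch, assuming the same `images/my_image.jpg` path:

```python
import numpy as np
from PIL import Image

fname = "images/my_image.jpg"   # assumed path, same as in the cell above
num_px = 64                     # matches the training image size

img = Image.open(fname).convert("RGB").resize((num_px, num_px))
image = np.asarray(img, dtype=np.float64) / 255.
my_image = image.reshape((1, num_px * num_px * 3)).T   # shape (12288, 1)
```

The resulting `my_image` can then be passed to `predict(d["w"], d["b"], my_image)` exactly as before.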
networkx_tutorial_full.ipynb
###Markdown TutorialThis guide can help you start working with NetworkX. Creating a graphCreate an empty graph with no nodes and no edges. ###Code import networkx as nx G = nx.Graph() ###Output _____no_output_____ ###Markdown By definition, a `Graph` is a collection of nodes (vertices) along withidentified pairs of nodes (called edges, links, etc). In NetworkX, nodes canbe any [hashable](https://docs.python.org/3/glossary.htmlterm-hashable) object e.g., a text string, an image, an XML object,another Graph, a customized node object, etc. NodesThe graph `G` can be grown in several ways. NetworkX includes many graphgenerator functions and facilities to read and write graphs in many formats.To get started though we’ll look at simple manipulations. You can add one nodeat a time, ###Code G.add_node(1) ###Output _____no_output_____ ###Markdown or add nodes from any [iterable](https://docs.python.org/3/glossary.htmlterm-iterable) container, such as a list ###Code G.add_nodes_from([2, 3]) ###Output _____no_output_____ ###Markdown You can also add nodes along with nodeattributes if your container yields 2-tuples of the form`(node, node_attribute_dict)`:```>>> G.add_nodes_from([... (4, {"color": "red"}),... (5, {"color": "green"}),... ])```Node attributes are discussed further below.Nodes from one graph can be incorporated into another: ###Code H = nx.path_graph(10) G.add_nodes_from(H) ###Output _____no_output_____ ###Markdown `G` now contains the nodes of `H` as nodes of `G`.In contrast, you could use the graph `H` as a node in `G`. ###Code G.add_node(H) ###Output _____no_output_____ ###Markdown The graph `G` now contains `H` as a node. This flexibility is very powerful asit allows graphs of graphs, graphs of files, graphs of functions and much more.It is worth thinking about how to structure your application so that the nodesare useful entities. Of course you can always use a unique identifier in `G`and have a separate dictionary keyed by identifier to the node information ifyou prefer. Edges`G` can also be grown by adding one edge at a time, ###Code G.add_edge(1, 2) e = (2, 3) G.add_edge(*e) # unpack edge tuple* ###Output _____no_output_____ ###Markdown by adding a list of edges, ###Code G.add_edges_from([(1, 2), (1, 3)]) ###Output _____no_output_____ ###Markdown or by adding any ebunch of edges. An *ebunch* is any iterablecontainer of edge-tuples. An edge-tuple can be a 2-tuple of nodes or a 3-tuplewith 2 nodes followed by an edge attribute dictionary, e.g.,`(2, 3, {'weight': 3.1415})`. Edge attributes are discussed furtherbelow. ###Code G.add_edges_from(H.edges) ###Output _____no_output_____ ###Markdown There are no complaints when adding existing nodes or edges. For example,after removing all nodes and edges, ###Code G.clear() ###Output _____no_output_____ ###Markdown we add new nodes/edges and NetworkX quietly ignores any that arealready present. ###Code G.add_edges_from([(1, 2), (1, 3)]) G.add_node(1) G.add_edge(1, 2) G.add_node("spam") # adds node "spam" G.add_nodes_from("spam") # adds 4 nodes: 's', 'p', 'a', 'm' G.add_edge(3, 'm') ###Output _____no_output_____ ###Markdown At this stage the graph `G` consists of 8 nodes and 3 edges, as can be seen by: ###Code G.number_of_nodes() G.number_of_edges() ###Output _____no_output_____ ###Markdown Examining elements of a graphWe can examine the nodes and edges. Four basic graph properties facilitatereporting: `G.nodes`, `G.edges`, `G.adj` and `G.degree`. 
Theseare set-like views of the nodes, edges, neighbors (adjacencies), and degreesof nodes in a graph. They offer a continually updated read-only view intothe graph structure. They are also dict-like in that you can look up nodeand edge data attributes via the views and iterate with data attributesusing methods `.items()`, `.data('span')`.If you want a specific container type instead of a view, you can specify one.Here we use lists, though sets, dicts, tuples and other containers may bebetter in other contexts. ###Code list(G.nodes) list(G.edges) list(G.adj[1]) # or list(G.neighbors(1)) G.degree[1] # the number of edges incident to 1 ###Output _____no_output_____ ###Markdown One can specify to report the edges and degree from a subset of all nodesusing an nbunch. An *nbunch* is any of: `None` (meaning all nodes),a node, or an iterable container of nodes that is not itself a node in thegraph. ###Code G.edges([2, 'm']) G.degree([2, 3]) ###Output _____no_output_____ ###Markdown Removing elements from a graphOne can remove nodes and edges from the graph in a similar fashion to adding.Use methods`Graph.remove_node()`,`Graph.remove_nodes_from()`,`Graph.remove_edge()`and`Graph.remove_edges_from()`, e.g. ###Code G.remove_node(2) G.remove_nodes_from("spam") list(G.nodes) G.remove_edge(1, 3) ###Output _____no_output_____ ###Markdown Using the graph constructorsGraph objects do not have to be built up incrementally - data specifyinggraph structure can be passed directly to the constructors of the variousgraph classes.When creating a graph structure by instantiating one of the graphclasses you can specify data in several formats. ###Code G.add_edge(1, 2) H = nx.DiGraph(G) # create a DiGraph using the connections from G list(H.edges()) edgelist = [(0, 1), (1, 2), (2, 3)] H = nx.Graph(edgelist) ###Output _____no_output_____ ###Markdown What to use as nodes and edgesYou might notice that nodes and edges are not specified as NetworkXobjects. This leaves you free to use meaningful items as nodes andedges. The most common choices are numbers or strings, but a node canbe any hashable object (except `None`), and an edge can be associatedwith any object `x` using `G.add_edge(n1, n2, object=x)`.As an example, `n1` and `n2` could be protein objects from the RCSB ProteinData Bank, and `x` could refer to an XML record of publications detailingexperimental observations of their interaction.We have found this power quite useful, but its abusecan lead to surprising behavior unless one is familiar with Python.If in doubt, consider using `convert_node_labels_to_integers()` to obtaina more traditional graph with integer labels. Accessing edges and neighborsIn addition to the views `Graph.edges`, and `Graph.adj`,access to edges and neighbors is possible using subscript notation. ###Code G = nx.Graph([(1, 2, {"color": "yellow"})]) G[1] # same as G.adj[1] G[1][2] G.edges[1, 2] ###Output _____no_output_____ ###Markdown You can get/set the attributes of an edge using subscript notationif the edge already exists. ###Code G.add_edge(1, 3) G[1][3]['color'] = "blue" G.edges[1, 2]['color'] = "red" G.edges[1, 2] ###Output _____no_output_____ ###Markdown Fast examination of all (node, adjacency) pairs is achieved using`G.adjacency()`, or `G.adj.items()`.Note that for undirected graphs, adjacency iteration sees each edge twice. 
###Code FG = nx.Graph() FG.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75), (2, 4, 1.2), (3, 4, 0.375)]) for n, nbrs in FG.adj.items(): for nbr, eattr in nbrs.items(): wt = eattr['weight'] if wt < 0.5: print(f"({n}, {nbr}, {wt:.3})") ###Output (1, 2, 0.125) (2, 1, 0.125) (3, 4, 0.375) (4, 3, 0.375) ###Markdown Convenient access to all edges is achieved with the edges property. ###Code for (u, v, wt) in FG.edges.data('weight'): if wt < 0.5: print(f"({u}, {v}, {wt:.3})") ###Output (1, 2, 0.125) (3, 4, 0.375) ###Markdown Adding attributes to graphs, nodes, and edgesAttributes such as weights, labels, colors, or whatever Python object you like,can be attached to graphs, nodes, or edges.Each graph, node, and edge can hold key/value attribute pairs in an associatedattribute dictionary (the keys must be hashable). By default these are empty,but attributes can be added or changed using `add_edge`, `add_node` or directmanipulation of the attribute dictionaries named `G.graph`, `G.nodes`, and`G.edges` for a graph `G`. Graph attributesAssign graph attributes when creating a new graph ###Code G = nx.Graph(day="Friday") G.graph ###Output _____no_output_____ ###Markdown Or you can modify attributes later ###Code G.graph['day'] = "Monday" G.graph ###Output _____no_output_____ ###Markdown Node attributesAdd node attributes using `add_node()`, `add_nodes_from()`, or `G.nodes` ###Code G.add_node(1, time='5pm') G.add_nodes_from([3], time='2pm') G.nodes[1] G.nodes[1]['room'] = 714 G.nodes.data() ###Output _____no_output_____ ###Markdown Note that adding a node to `G.nodes` does not add it to the graph, use`G.add_node()` to add new nodes. Similarly for edges. Edge AttributesAdd/change edge attributes using `add_edge()`, `add_edges_from()`,or subscript notation. ###Code G.add_edge(1, 2, weight=4.7 ) G.add_edges_from([(3, 4), (4, 5)], color='red') G.add_edges_from([(1, 2, {'color': 'blue'}), (2, 3, {'weight': 8})]) G[1][2]['weight'] = 4.7 G.edges[3, 4]['weight'] = 4.2 ###Output _____no_output_____ ###Markdown The special attribute `weight` should be numeric as it is used byalgorithms requiring weighted edges. Directed graphsThe `DiGraph` class provides additional methods and properties specificto directed edges, e.g.,`DiGraph.out_edges`, `DiGraph.in_degree`,`DiGraph.predecessors()`, `DiGraph.successors()` etc.To allow algorithms to work with both classes easily, the directed versions of`neighbors()` is equivalent to `successors()` while `degree` reportsthe sum of `in_degree` and `out_degree` even though that may feelinconsistent at times. ###Code DG = nx.DiGraph() DG.add_weighted_edges_from([(1, 2, 0.5), (3, 1, 0.75)]) DG.out_degree(1, weight='weight') DG.degree(1, weight='weight') list(DG.successors(1)) list(DG.neighbors(1)) ###Output _____no_output_____ ###Markdown Some algorithms work only for directed graphs and others are not welldefined for directed graphs. Indeed the tendency to lump directedand undirected graphs together is dangerous. If you want to treata directed graph as undirected for some measurement you should probablyconvert it using `Graph.to_undirected()` or with ###Code H = nx.Graph(G) # create an undirected graph H from a directed graph G ###Output _____no_output_____ ###Markdown MultigraphsNetworkX provides classes for graphs which allow multiple edgesbetween any pair of nodes. The `MultiGraph` and`MultiDiGraph`classes allow you to add the same edge twice, possibly with differentedge data. 
This can be powerful for some applications, but manyalgorithms are not well defined on such graphs.Where results are well defined,e.g., `MultiGraph.degree()` we provide the function. Otherwise youshould convert to a standard graph in a way that makes the measurementwell defined. ###Code MG = nx.MultiGraph() MG.add_weighted_edges_from([(1, 2, 0.5), (1, 2, 0.75), (2, 3, 0.5)]) dict(MG.degree(weight='weight')) GG = nx.Graph() for n, nbrs in MG.adjacency(): for nbr, edict in nbrs.items(): minvalue = min([d['weight'] for d in edict.values()]) GG.add_edge(n, nbr, weight = minvalue) nx.shortest_path(GG, 1, 3) ###Output _____no_output_____ ###Markdown Graph generators and graph operationsIn addition to constructing graphs node-by-node or edge-by-edge, theycan also be generated by1. Applying classic graph operations, such as:1. Using a call to one of the classic small graphs, e.g.,1. Using a (constructive) generator for a classic graph, e.g.,like so: ###Code K_5 = nx.complete_graph(5) K_3_5 = nx.complete_bipartite_graph(3, 5) barbell = nx.barbell_graph(10, 10) lollipop = nx.lollipop_graph(10, 20) ###Output _____no_output_____ ###Markdown 1. Using a stochastic graph generator, e.g,like so: ###Code er = nx.erdos_renyi_graph(100, 0.15) ws = nx.watts_strogatz_graph(30, 3, 0.1) ba = nx.barabasi_albert_graph(100, 5) red = nx.random_lobster(100, 0.9, 0.9) ###Output _____no_output_____ ###Markdown 1. Reading a graph stored in a file using common graph formats, such as edge lists, adjacency lists, GML, GraphML, pickle, LEDA and others. ###Code nx.write_gml(red, "path.to.file") mygraph = nx.read_gml("path.to.file") ###Output _____no_output_____ ###Markdown For details on graph formats see Reading and writing graphsand for graph generator functions see Graph generators Analyzing graphsThe structure of `G` can be analyzed using various graph-theoreticfunctions such as: ###Code G = nx.Graph() G.add_edges_from([(1, 2), (1, 3)]) G.add_node("spam") # adds node "spam" list(nx.connected_components(G)) sorted(d for n, d in G.degree()) nx.clustering(G) ###Output _____no_output_____ ###Markdown Some functions with large output iterate over (node, value) 2-tuples.These are easily stored in a [dict](https://docs.python.org/3/library/stdtypes.htmldict) structure if you desire. ###Code sp = dict(nx.all_pairs_shortest_path(G)) sp[3] ###Output _____no_output_____ ###Markdown See Algorithms for details on graph algorithmssupported. Drawing graphsNetworkX is not primarily a graph drawing package but basic drawing withMatplotlib as well as an interface to use the open source Graphviz softwarepackage are included. These are part of the `networkx.drawing` module and willbe imported if possible.First import Matplotlib’s plot interface (pylab works too) ###Code import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown To test if the import of `networkx.drawing` was successful draw `G` using one of ###Code G = nx.petersen_graph() plt.subplot(121) nx.draw(G, with_labels=True, font_weight='bold') plt.subplot(122) nx.draw_shell(G, nlist=[range(5, 10), range(5)], with_labels=True, font_weight='bold') ###Output _____no_output_____ ###Markdown when drawing to an interactive display. Note that you may need to issue aMatplotlib ###Code plt.show() ###Output _____no_output_____ ###Markdown command if you are not using matplotlib in interactive mode (see[Matplotlib FAQ](http://matplotlib.org/faq/installing_faq.htmlmatplotlib-compiled-fine-but-nothing-shows-up-when-i-use-it)). 
###Code options = { 'node_color': 'black', 'node_size': 100, 'width': 3, } plt.subplot(221) nx.draw_random(G, **options) plt.subplot(222) nx.draw_circular(G, **options) plt.subplot(223) nx.draw_spectral(G, **options) plt.subplot(224) nx.draw_shell(G, nlist=[range(5,10), range(5)], **options) ###Output _____no_output_____ ###Markdown You can find additional options via `draw_networkx()` andlayouts via `layout`.You can use multiple shells with `draw_shell()`. ###Code G = nx.dodecahedral_graph() shells = [[2, 3, 4, 5, 6], [8, 1, 0, 19, 18, 17, 16, 15, 14, 7], [9, 10, 11, 12, 13]] nx.draw_shell(G, nlist=shells, **options) ###Output _____no_output_____ ###Markdown To save drawings to a file, use, for example ###Code nx.draw(G) plt.savefig("path.png") ###Output _____no_output_____ ###Markdown writes to the file `path.png` in the local directory. If Graphviz andPyGraphviz or pydot, are available on your system, you can also use`nx_agraph.graphviz_layout(G)` or `nx_pydot.graphviz_layout(G)` to get thenode positions, or write the graph in dot format for further processing. ###Code from networkx.drawing.nx_pydot import write_dot pos = nx.nx_agraph.graphviz_layout(G) nx.draw(G, pos=pos) write_dot(G, 'file.dot') ###Output _____no_output_____
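###Markdown As a final, self-contained sketch (not part of the original tutorial) combining edge weights with the shortest-path routine mentioned earlier, a weighted shortest path on the small `FG` example can be computed like this:

```python
import networkx as nx

FG = nx.Graph()
FG.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75), (2, 4, 1.2), (3, 4, 0.375)])

# With weight='weight', shortest_path runs Dijkstra on the edge weights.
print(nx.shortest_path(FG, source=1, target=4, weight='weight'))         # [1, 3, 4]
print(nx.shortest_path_length(FG, source=1, target=4, weight='weight'))  # 1.125
```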
notebooks/OffensiveNim.ipynb
###Markdown Nim's Offensive Applications After looking at Nim malware from the defensive perspective, let's examine the other side of the coin.Nim is a fantastic malware development language for a few reasons:- Elegent, easy access to the Win API via the winim library (we saw this with the ShellExecute binary example).- Nim compiled binaries are compiled directly to native C/C++ code. Your malware binaries do not rely on the existence of an interpreter or virtual machine to execute.- Nim binaries are tiny compared to other cross-compileable, statically-linked languages like Go and Rust.- Winim can also call the Component Object Model (COM) directly which allows for flexibility of post-exploitation execution.- Easy generation of Windows DLLs.... and a whole lot more. OffensiveNimaka "The Sacred Texts"[OffensiveNim](https://github.com/byt3bl33d3r/OffensiveNim) is Marcello Salvati's (aka [@byt3bl33d3r](https://twitter.com/byt3bl33d3r)) research repository for the offensive application of Nim. No talk on Nim malware would be complete without mentioning his incredible work.This repository has several powerful proof of concepts for a wide range of offensive activities, from classic [CreateRemoteThread shellcode injection](https://github.com/byt3bl33d3r/OffensiveNim/blob/master/src/shellcode_bin.nim) to [minidumping LSASS](https://github.com/byt3bl33d3r/OffensiveNim/blob/master/src/minidump_bin.nim) to [keylogging](https://github.com/byt3bl33d3r/OffensiveNim/blob/master/src/keylogger_bin.nim) to [things so dope I don't even understand what they are doing](https://github.com/byt3bl33d3r/OffensiveNim/blob/master/src/taskbar_ewmi_bin.nim). Example: CreateRemoteThread Shellcode InjectionLet's examine a simple POC from this repository to see an example of how Nim's chartacteristics lend itself to malware development. ###Code !cat ../samples/src/createremotethread/createremotethread.nim ###Output _____no_output_____ ###Markdown ---Now, let's look at another simple POC for a malicious function: **DNS exfiltration**.Using the following Nim code, I am able to read in the bytes of `cosmo.jpg`, encode them in URL safe base64, and make a series of DNS TXT record lookups to a specified name server: ###Code !cat ../samples/src/DNSExfilCosmo/DNSExfilCosmo.nim ###Output _____no_output_____
beijingair_2016.ipynb
###Markdown By using `read_csv(engine, encoding)`, solving the utf-8 error, and reading the file correctly. reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html ###Code air_2016 = pd.read_csv('Beijing_2016_HourlyPM25_created20170201.csv', skiprows=3, engine ='python', encoding = 'latin_1', parse_dates=[2]) air_2016.head() air_2016.tail() air_2016.index air_2016.info() air_2016.columns ###Output _____no_output_____ ###Markdown Looking for the missing data. According to *the U.S. Department of State air quality files*, Missing values are listed as -999 but not null. In this case, we can't filter boolean by using 'dropna( )'. ###Code air_2016_filtered = air_2016[air_2016['Value'] == -999] air_2016_filtered ###Output _____no_output_____ ###Markdown Data cleaning. By using `drop( )`, removing the rows with missing data. ###Code air_2016 = air_2016.drop(air_2016.loc[air_2016['Value'] == -999].index) air_2016 air_2016.Value.max() air_2016.Value.mean() air_2016.Value.median() ###Output _____no_output_____ ###Markdown To get the first sense of how air quality looks like in Beijing, 2015, from 00:00 Jan 1st to 23:59 Dec 31. ###Code air_2016.plot(y='Value') midnight=air_2016[air_2016['Hour'] == 0] midnight midday = air_2016[air_2016['Hour'] == 12] midday ###Output _____no_output_____ ###Markdown I have a hypothesis, due to the actitivities during the daytime, air quality in the day is worse than it in the night. ###Code midday.plot(y='Value') midnight.plot(y='Value') ax = midday.plot(y="Value", label="midday") midnight.plot(y="Value",label="midnight",ax=ax) ax.set_ylabel("PM 2.5") ###Output _____no_output_____ ###Markdown I assume the air quality on Chinese new year eve is more likely worse than new year eve, due to the cultural bond. ###Code new_year_eve = air_2016[((air_2016['Month']==12) & (air_2016['Day']==31))] new_year_eve.head() new_year_eve.describe() new_year_eve.plot(x='Hour', y='Value') ###Output _____no_output_____ ###Markdown Chinese new year is Feb 7 ###Code chinese_new_year_eve = air_2016[((air_2016['Month']==2) & (air_2016['Day']==7))] chinese_new_year_eve.head() chinese_new_year_eve.describe() chinese_new_year_eve.plot(x ='Hour', y='Value') ax = new_year_eve.plot(x="Hour", y="Value", label="New Year Eve") chinese_new_year_eve.plot(x="Hour",y="Value",label="Chinese New Year Eve",ax=ax) ax.set_ylabel("PM 2.5") ###Output _____no_output_____ ###Markdown *Next step:* Compare the two different new year's eves with an average day (calculated hour by hour). To compare to normal days, I need to find the average index of every hour. ###Code hourly_average = air_2016[['Hour','Value']].groupby(['Hour']).mean().reset_index() hourly_average ax = new_year_eve.plot(x="Hour", y="Value", label="New Year Eve") chinese_new_year_eve.plot(x="Hour",y="Value",label="Chinese New Year Eve",ax=ax) hourly_average.plot(x='Hour',y='Value',label="Average Day", ax=ax) ax.set_ylabel("PM 2.5") ###Output _____no_output_____ ###Markdown I also want to know in which month/months, Beijingers have better air. 
###Code month_average = air_2016[['Month','Value']].groupby(['Month']).mean().reset_index() month_average list(month_average['Month']) import matplotlib.pyplot as plt; plt.rcdefaults() import numpy as np import matplotlib.pyplot as plt plt.bar(list(month_average['Month']), list(month_average['Value']), align='center',alpha=0.5) from datetime import date month_names = [date(2015,m,1).strftime('%b') for m in list(month_average['Month'])] month_names plt.bar(month_names, list(month_average['Value']), align='center',alpha=0.5) ###Output _____no_output_____
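###Markdown A possible follow-up to answer the month question directly (a sketch that assumes the `month_average` frame computed above is still in memory): sort the months by their mean PM 2.5.

```python
# Rank months from cleanest to most polluted by average PM 2.5
ranked = month_average.sort_values('Value')
print(ranked.head(3))   # months with the lowest average PM 2.5
print(ranked.tail(3))   # months with the highest average PM 2.5
```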
data-directory/create_HMDB51/stat_org.ipynb
###Markdown MIT LicenseCopyright (c) 2021 Taiki Miyagawa and Akinori F. EbiharaPermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in allcopies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THESOFTWARE. HMDB51 Statistics* Num of total videos: 6766* Num of total frames: 632665* Max num of frames in a video: 1062* Min num of frames in a video: 18* Long videos (descending order):['/data/t-miyagawa/HMDB51png/pour/How_to_pour_beer_pour_u_nm_np1_fr_goo_0''/data/t-miyagawa/HMDB51png/pour/How_to_pour_beer__eh__pour_u_nm_np1_fr_goo_0''/data/t-miyagawa/HMDB51png/talk/jonhs_netfreemovies_holygrail_talk_h_nm_np1_fr_med_6''/data/t-miyagawa/HMDB51png/throw/baseballpitchslowmotion_throw_f_nm_np1_fr_med_0''/data/t-miyagawa/HMDB51png/climb/Bristol_UCR_roof_climb_climb_f_cm_np1_ba_bad_0']Num of frames: [1062, 1062, 846, 741, 728]* Short videos (ascending order):['/data/t-miyagawa/HMDB51png/somersault/LONGESTYARD_somersault_f_cm_np1_le_bad_27''/data/t-miyagawa/HMDB51png/drink/BLACK_HAWK_DOWN_drink_h_nm_np1_fr_bad_36''/data/t-miyagawa/HMDB51png/run/likebeckam_run_f_cm_np1_le_med_3''/data/t-miyagawa/HMDB51png/run/likebeckam_run_f_cm_np1_ri_med_1''/data/t-miyagawa/HMDB51png/run/BLACK_HAWK_DOWN_run_l_nm_np1_ba_med_16']Num of frames: [18, 21, 21, 21, 21]![image-2.png](attachment:image-2.png) Naming Rules of Label Texts`glob` does not work for "[" or "]". Use "[[]" and "[]]" instead. 
`path.replace("[", "[[").replace("]", "[]]").replace("[[", "[[]")` does a good job.``` Naming rules in label text file There are totally 153 files in this folder,[action]_test_split[1-3].txt corresponding to three splits reported in the paper.The format of each file is[video_name] [id]The video is included in the training set if id is 1The video is included in the testing set if id is 2The video is not included for training/testing if id is 0There should be 70 videos with id 1 , 30 videos with id 2 in each txt file.PROPERTY LABELS (ABBREVIATION)visible body parts head(h), upper body(u), full body (f), lower body(l)camera motion motion (cm), static (nm)number of people involved in the action Single (np1), two (np2), three (np3)camera viewpoint Front (fr), back (ba), left(le), right(ri)video quality good (goo), medium (med), ok (bad) Templates label file names:ClassName_test_split[1-3].txtvideo names:VideoName_ClassName_VisibleBodyParts_CameraMotion_NumberOfPeopleInvolvedInTheAction_CameraViewpoint_VideoQuality_Number\.avi ID Examples in class "smile" my_smile_smile_h_cm_np1_fr_goo_0.avi 1prelinger_LetsPlay1949_smile_h_nm_np1_fr_goo_27.avi 2prelinger_LetsPlay1949_smile_h_nm_np1_le_goo_25.avi 2prelinger_LetsPlay1949_smile_u_nm_np1_fr_med_24.avi 0prelinger_LetsPlay1949_smile_u_nm_np1_ri_med_21.avi 2prelinger_they_grow_up_so_fast_1_smile_u_nm_np1_fr_med_0.avi 1show_your_smile_-)_smile_h_nm_np1_fr_med_0.avi 1showyoursmile_smile_h_nm_np1_fr_goo_0.avi 1smile_collection_7_smile_h_nm_np1_fr_goo_0.avi 1smile_collection_7_smile_h_nm_np1_fr_goo_1.avi 1youtube_smile_response_smile_h_nm_np1_fr_goo_0.avi 1``` Get statistical information ###Code from glob import glob import os import statistics import matplotlib.pyplot as plt import numpy as np DATADIR = "Define this first. 
E.g., /data/t-miyagawa" # Get videodir and numf datadir = "{}/HMDB51png".format(DATADIR) classdir = sorted(glob(datadir + "/*")) classdir = [i + "/" for i in classdir] classnames = [i[i.rfind("HMDB51png/") + 10 : -1] for i in classdir] videodir = { k : sorted(glob([v for v in classdir if v.find("/" + k + "/") != -1][0] + "/*")) for k in classnames} numf = dict() for k in classnames: v1 = videodir[k] v2 = [i.replace("[", "[[").replace("]", "[]]").replace("[[", "[[]") for i in v1] numf[k] = [len(glob(_video + "/*.png")) for _video in v2] #videodir, numf # path to video directories, num of frames for each # Smear the keys numf_concat = [] for k in classnames: v = numf[k] numf_concat.extend(v) videodir_concat = [] for k in classnames: v = videodir[k] videodir_concat.extend(v) # Classwise num of frames numf_classwise = [] for k in classnames: v = numf[k] v = sum(v) numf_classwise.append(v) # Classwise num of videos (clips) numv_classwise = [] for k in classnames: v = videodir[k] v = len(v) numv_classwise.append(v) # Classwise num of unique videos (groups) numuv_classwise = [] for k in classnames: v1 = videodir[k] # ['DATADIR/HMDB51png/wave/20060723sfjffbartsinger_wave_f_cm_np1_ba_med_0', # 'DATADIR/HMDB51png/wave/21_wave_u_nm_np1_fr_goo_5', # 'DATADIR/HMDB51png/wave/50_FIRST_DATES_wave_f_cm_np1_fr_med_0', # 'DATADIR/HMDB51png/wave/50_FIRST_DATES_wave_u_cm_np1_fr_goo_30', # 'DATADIR/HMDB51png/wave/50_FIRST_DATES_wave_u_cm_np1_fr_med_1', # 'DATADIR/HMDB51png/wave/50_FIRST_DATES_wave_u_cm_np1_fr_med_36', v2 = [i[i.rfind("/")+1:] for i in v1] # ['20060723sfjffbartsinger_wave_f_cm_np1_ba_med_0', # '21_wave_u_nm_np1_fr_goo_5', # '50_FIRST_DATES_wave_f_cm_np1_fr_med_0', # '50_FIRST_DATES_wave_u_cm_np1_fr_goo_30', # '50_FIRST_DATES_wave_u_cm_np1_fr_med_1', # '50_FIRST_DATES_wave_u_cm_np1_fr_med_36', v3 = [i[:i.rfind(k)-1] for i in v2] # ['20060723sfjffbartsinger', # '21', # '50_FIRST_DATES', # '50_FIRST_DATES', # '50_FIRST_DATES', # '50_FIRST_DATES', v4 = [] for i in v3: if not i in v4: v4.append(i) # ['20060723sfjffbartsinger', # '21', # '50_FIRST_DATES', numuv_classwise.append(len(v4)) # Returns: # classnames: List. Len = Num of classes. Names of classes in alphabetical order. # # videodir: Dict. Paths to video directories. Each values (paths) are in alphabetical order of video names. # numf: Dict. Num of frames for each videos. Each values (integers) are in alphabetical order of video names. # # numf_concat: List. Len = Num of total videos. Order is the same as `videoddir_concat`. # videodir_concat: List. Len = Num of total videos. Order is the same as `numf_concat`. # # numf_classwise: List. Len = Num of classes. The classwise numbers of frames in alphabetical order of class names. # numv_classwise: List. Len = Num of classes. The classwise numbers of videos (clips) in alphabetical order of class names. # numuv_classwise: List. Len = Num of classes. The classwise numbers of unique videos (groups) in alphabetical order of class names. 
###Output _____no_output_____ ###Markdown Statistics ###Code print("* Num of total videos: {}".format(len(numf_concat))) print("* Num of total frames: {}".format(sum(numf_concat))) print("* Max num of frames in a video: {}".format(max(numf_concat))) print("* Min num of frames in a video: {}".format(min(numf_concat))) _numshow = 5 print("* Long videos (descending order):\n{}".format(np.array(videodir_concat)[np.argsort(numf_concat)[-_numshow:][::-1]])) print("Num of frames:\n{}".format(sorted(numf_concat)[-_numshow:][::-1])) print("\n* Short videos (ascending order):\n{}".format(np.array(videodir_concat)[np.argsort(numf_concat)[:_numshow]])) print("Num of frames:\n{}".format(sorted(numf_concat)[:_numshow])) plt.title("Num of videos (y) vs. Num of frames in a video (x)") #plt.yscale("log") plt.hist(numf_concat, bins=200) print("Mean: {}".format(statistics.mean(numf_concat))) print("Median: {}".format(statistics.median(numf_concat))) print("Mode: {}".format(statistics.mode(numf_concat))) plt.title("Classwise Num of frames") #plt.yscale("log") plt.bar([i + 1 for i in range(51)], numf_classwise) print("Mean: {}".format(statistics.mean(numf_classwise))) print("Median: {}".format(statistics.median(numf_classwise))) plt.title("Classwise Num of videos (clips)") #plt.yscale("log") plt.bar([i + 1 for i in range(51)], numv_classwise) print("Mean: {}".format(statistics.mean(numv_classwise))) print("Median: {}".format(statistics.median(numv_classwise))) plt.title("Classwise Num of unique videos (groups)") #plt.yscale("log") plt.bar([i + 1 for i in range(51)], numuv_classwise) print("Mean: {}".format(statistics.mean(numuv_classwise))) print("Median: {}".format(statistics.median(numuv_classwise))) ###Output Mean: 44.92156862745098 Median: 40
docs/src/tutorials/Allocation.ipynb
###Markdown Allocation The allocation module provides some utils to be used before running A/B test experiments. Groups allocation is the process that assigns (allocates) a list of users either to a group A (e.g. control) or to a group B (e.g. treatment). This module provides functionalities to randomly allocate users in two or more groups (A/B/C/...).Let's import first the tools needed. ###Code import numpy as np import pandas as pd from abexp.core.allocation import Allocator from abexp.core.analysis_frequentist import FrequentistAnalyzer ###Output _____no_output_____ ###Markdown Complete randomization Here we want to randomly assign users in *n* groups (where *n*=2) in order to run an A/B test experiment with 2 variants, so called control and treatment groups. Complete randomization does not require any data on the user, and in practice, it yields balanced design for large-sample sizes. ###Code # Generate random data user_id = np.arange(100) # Run allocation df, stats = Allocator.complete_randomization(user_id=user_id, ngroups=2, prop=[0.4, 0.6], seed=42) # Users list with group assigned df.head() # Statistics of the randomization: #users per group stats ###Output _____no_output_____ ###Markdown Note: Post-allocation checks can be made to ensure the groups homogeneity and in case of imbalance, a new randomization can be performed (see the [Homogeneity check](homogeneity_check) section below for details). Blocks randomization In some case, one would like to consider one or more confounding factor(s) i.e. features which could unbalance the groups and bias the results if not taken into account during the randomization process. In this example we want to randomly assign users in n groups (where n=3, one control and two treatment groups) considering a confounding factor ('level'). Users with similar characteristics (level) define a block, and randomization is conducted within a block. This enables balanced and homogeneous groups of similar sizes according to the confounding feature. ###Code # Generate random data np.random.seed(42) df = pd.DataFrame(data={'user_id': np.arange(1000), 'level': np.random.randint(1, 6, size=1000)}) # Run allocation df, stats = Allocator.blocks_randomization(df=df, id_col='user_id', stratum_cols='level', ngroups=3, seed=42) # Users data with group assigned df.head() # Statistics of the randomization: #users per group in each level stats ###Output _____no_output_____ ###Markdown __Multi-level block randomization__ You can stratify randomization on two or more features. In the example below we want to randomly allocate users in *n* groups (where *n*=5) in order to run an A/B test experiment with 5 variants, one control and four treatment groups. Thestratification will be based on the user level and paying status in order to create homogeneous groups. ###Code # Generate random data np.random.seed(42) df = pd.DataFrame(data={'user_id': np.arange(1000), 'is_paying': np.random.randint(0, 2, size=1000), 'level': np.random.randint(1, 7, size=1000)}) # Run allocation df, stats = Allocator.blocks_randomization(df=df, id_col='user_id', stratum_cols=['level', 'is_paying'], ngroups=5, seed=42) # Users data with group assigned df.head() # Statistics of the randomization: #users per group in each level and paying status stats ###Output _____no_output_____ ###Markdown Homogeneity check **Complete randomization** does not guarantee homogeneous groups, but it yields balanced design for large-sample sizes. 
**Blocks randomization** guarantees homogeneous groups based on categorical variables (but not on continuous variable).Thus, we can perform post-allocation checks to ensure the groups homogeneity both for continuous or categorical variables. In case of imbalance, a new randomization can be performed. ###Code # Generate random data np.random.seed(42) df = pd.DataFrame(data={'user_id': np.arange(1000), 'points': np.random.randint(100, 500, size=1000), 'collected_bonus': np.random.randint(2000, 7000, size=1000), 'is_paying': np.random.randint(0, 2, size=1000), 'level': np.random.randint(1, 7, size=1000)}) df.head() ###Output _____no_output_____ ###Markdown __Single iteration__In the cell below it is shown a single iteration of check homogeneity analysis. ###Code # Run allocation df, stats = Allocator.blocks_randomization(df=df, id_col='user_id', stratum_cols=['level', 'is_paying'], ngroups=2, seed=42) # Run homogeneity check analysis X = df.drop(columns=['group']) y = df['group'] analyzer = FrequentistAnalyzer() analysis = analyzer.check_homogeneity(X, y, cat_cols=['is_paying','level']) analysis ###Output _____no_output_____ ###Markdown The ``check_homogeneity`` function performs univariate logistic regression per each feature of the input dataset. If the p-value (column ``P>|z|`` in the table above) of any variables is below a certain threshold (e.g. ``threshold = 0.2``), the random allocation is considered to be non homogeneous and it must be repeated. For instance, in the table above the variable ``collected_bonus`` is not homogeneously split across groups ``p-value = 0.119``. __Multiple iterations__ ###Code # Generate random data np.random.seed(42) df = pd.DataFrame(data={'user_id': np.arange(1000), 'points': np.random.randint(100, 500, size=1000), 'collected_bonus': np.random.randint(2000, 7000, size=1000), 'is_paying': np.random.randint(0, 2, size=1000), 'level': np.random.randint(1, 7, size=1000)}) df.head() ###Output _____no_output_____ ###Markdown In the cell below we repeatedly perform random allocation until it creates homogeneous groups (up to a maximum number of iterations). The groups are considered to be homogeneous when the p-value (column ``P>|z|``) of any variables is below a certain threshold (e.g. ``p-values < 0.2``). ###Code # Define parameters rep = 100 threshold = 0.2 analyzer = FrequentistAnalyzer() for i in np.arange(rep): # Run allocation df, stats = Allocator.blocks_randomization(df=df, id_col='user_id', stratum_cols=['level', 'is_paying'], ngroups=2, seed=i + 45) # Run homogeneity check analysis X = df.drop(columns=['group']) y = df['group'] analysis = analyzer.check_homogeneity(X, y, cat_cols=['is_paying','level']) # Check p-values if all(analysis['P>|z|'] > threshold): break df = df.drop(columns=['group']) analysis ###Output _____no_output_____
04_training_linear_models.ipynb
###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # 불필요한 경고를 무시합니다 (사이파이 이슈 #5998 참조) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). 
`np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) 
theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, 
include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt 
import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - 
y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ 
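###Markdown A quick numerical sanity check of Equations 4-20 and 4-22 (this small example is not part of the book's code): for a single instance with toy class scores, the softmax turns the scores into probabilities that sum to 1, and with a one-hot target the cross entropy reduces to minus the log of the estimated probability of the true class. ###Code
import numpy as np

# Toy class scores s_k(x) for one instance and three classes (k = 0, 1, 2)
s = np.array([2.0, 1.0, 0.1])

# Equation 4-20: the softmax turns the scores into class probabilities
p_hat = np.exp(s) / np.sum(np.exp(s))
print(p_hat)         # approximately [0.659, 0.242, 0.099]
print(p_hat.sum())   # 1.0

# Equation 4-22 for one instance with one-hot target y = [1, 0, 0]:
# the cross entropy is -log(p_hat of the true class)
y = np.array([1.0, 0.0, 0.0])
print(-np.sum(y * np.log(p_hat)))   # -log(p_hat[0]) ≈ 0.417
###Output _____no_output_____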
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercices in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import numpy.random as rnd import os # to make this notebook's output stable across runs rnd.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code X = 2 * rnd.rand(100, 1) y = 4 + 3 * X + rnd.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() import numpy.linalg as LA X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = LA.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) rnd.seed(42) theta = rnd.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] n_iterations = 50 t0, t1 = 5, 50 # learning schedule hyperparameters rnd.seed(42) theta = rnd.randn(2,1) # random initialization def learning_schedule(t): return t0 / (t + t1) m = len(X_b) for epoch in range(n_iterations): for i in range(m): if epoch == 0 and i < 20: y_predict = X_new_b.dot(theta) style = "b-" if i > 0 else "r--" plt.plot(X_new, y_predict, style) random_index = rnd.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("sgd_plot") plt.show() theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 rnd.seed(42) theta = rnd.randn(2,1) # random 
initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = rnd.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline(( ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), )) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", 
linewidth=2, label="Training set") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Training set size", fontsize=14) plt.ylabel("RMSE", fontsize=14) lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) save_fig("underfitting_learning_curves_plot") plt.show() from sklearn.pipeline import Pipeline polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("sgd_reg", LinearRegression()), )) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) save_fig("learning_curves_plot") plt.show() ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge rnd.seed(42) m = 20 X = 3 * rnd.rand(m, 1) y = 1 + 0.5 * X + rnd.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), )) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1)) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline(( ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), )) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) 
train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, 
t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) from sklearn.linear_model import LogisticRegression X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 log_reg = LogisticRegression() log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), 
np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolour") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) #100*1的随机数矩阵, 100个样本 y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best # np.dot(theta_best) X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
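# Reading the curves: the validation RMSE drops at first, reaches its minimum at the epoch
# marked "Best model" above, then slowly rises again as the model starts to overfit the
# training set. That turning point is exactly what early stopping exploits.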
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
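Before adding the penalty, here is a tiny shape audit in the spirit of the advice above: a sketch that just prints the shape of each term from the unregularized run, reusing only variables already defined.
###Code
logits = X_train.dot(Theta)              # (m, n_outputs): one score per class for every training instance
Y_proba = softmax(logits)                # (m, n_outputs): scores converted to class probabilities
error = Y_proba - Y_train_one_hot        # (m, n_outputs): predicted minus target probabilities
gradients = 1/m * X_train.T.dot(error)   # (n_inputs, n_outputs): same shape as Theta, as it must be
logits.shape, Y_proba.shape, error.shape, gradients.shape, Theta.shape
###Output
_____no_output_____
###Markdown
With the shapes lined up, on to the regularized run: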
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
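The loop above stops the first time the validation loss fails to improve. With noisier data that single check can fire too early, so a common variant (not used in this chapter) tolerates a few non-improving epochs before giving up and keeps the best parameters seen so far. A sketch of that idea with a hypothetical `patience` setting, reusing the same training step:
###Code
# "Patience"-based early stopping (hypothetical variant, not part of the original exercise).
# It trains its own copy of the parameters, so the Theta used by the following cells is untouched.
eta, alpha, epsilon = 0.1, 0.1, 1e-7
patience = 20                                  # hypothetical setting: non-improving epochs to tolerate
epochs_without_improvement = 0
best_loss, best_Theta = np.infty, None
Theta_patience = np.random.randn(n_inputs, n_outputs)

for epoch in range(5001):
    # one regularized gradient step on the training set (same update rule as above)
    error = softmax(X_train.dot(Theta_patience)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_patience[1:]]
    Theta_patience = Theta_patience - eta * gradients

    # validation loss after this epoch
    Y_proba_valid = softmax(X_valid.dot(Theta_patience))
    valid_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))

    if valid_loss < best_loss:
        best_loss, best_Theta, epochs_without_improvement = valid_loss, Theta_patience, 0
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(epoch, valid_loss, "early stopping (patience exhausted)")
            break
###Output
_____no_output_____
###Markdown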
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown
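As a final sanity check (not part of the original exercise), one could compare this hand-rolled model against Scikit-Learn's own softmax classifier on the same split; the manually added bias column has to be dropped because `LogisticRegression` fits its own intercept:
###Code
from sklearn.linear_model import LogisticRegression

sk_softmax = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)
sk_softmax.fit(X_train[:, 1:], y_train)          # drop the bias column added by hand
sk_accuracy = np.mean(sk_softmax.predict(X_test[:, 1:]) == y_test)
sk_accuracy
###Output
_____no_output_____
###Markdown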
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
**Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import numpy.random as rnd import os # to make this notebook's output stable across runs rnd.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code X = 2 * rnd.rand(100, 1) y = 4 + 3 * X + rnd.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() import numpy.linalg as LA X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = LA.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) rnd.seed(42) theta = rnd.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] n_iterations = 50 t0, t1 = 5, 50 # learning schedule hyperparameters rnd.seed(42) theta = rnd.randn(2,1) # random initialization def learning_schedule(t): return t0 / (t + t1) m = len(X_b) for epoch in range(n_iterations): for i in range(m): if epoch == 0 and i < 20: y_predict = X_new_b.dot(theta) style = "b-" if i > 0 else "r--" plt.plot(X_new, y_predict, style) random_index = rnd.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("sgd_plot") plt.show() theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 rnd.seed(42) theta = rnd.randn(2,1) # random 
initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = rnd.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline(( ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), )) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", 
linewidth=2, label="Training set") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Training set size", fontsize=14) plt.ylabel("RMSE", fontsize=14) lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) save_fig("underfitting_learning_curves_plot") plt.show() from sklearn.pipeline import Pipeline polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("sgd_reg", LinearRegression()), )) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) save_fig("learning_curves_plot") plt.show() ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge rnd.seed(42) m = 20 X = 3 * rnd.rand(m, 1) y = 1 + 0.5 * X + rnd.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), )) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1)) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline(( ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), )) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) 
train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, 
t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) from sklearn.linear_model import LogisticRegression X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 log_reg = LogisticRegression() log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), 
np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup This project requires Python 3.7 or above: ###Code import sys assert sys.version_info >= (3, 7) ###Output _____no_output_____ ###Markdown It also requires Scikit-Learn ≥ 1.0.1: ###Code import sklearn assert sklearn.__version__ >= "1.0.1" ###Output _____no_output_____ ###Markdown As we did in previous chapters, let's define the default font sizes to make the figures prettier: ###Code import matplotlib.pyplot as plt plt.rc('font', size=14) plt.rc('axes', labelsize=14, titlesize=14) plt.rc('legend', fontsize=14) plt.rc('xtick', labelsize=10) plt.rc('ytick', labelsize=10) ###Output _____no_output_____ ###Markdown And let's create the `images/training_linear_models` folder (if it doesn't already exist), and define the `save_fig()` function which is used through this notebook to save the figures in high-res for the book: ###Code from pathlib import Path IMAGES_PATH = Path() / "images" / "training_linear_models" IMAGES_PATH.mkdir(parents=True, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = IMAGES_PATH / f"{fig_id}.{fig_extension}" if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np np.random.seed(42) # to make this code example reproducible m = 100 # number of instances X = 2 * np.random.rand(m, 1) # column vector y = 4 + 3 * X + np.random.randn(m, 1) # column vector # extra code – generates and saves Figure 4–1 import matplotlib.pyplot as plt plt.figure(figsize=(6, 4)) plt.plot(X, y, "b.") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([0, 2, 0, 15]) plt.grid() save_fig("generated_data_plot") plt.show() from sklearn.preprocessing import add_dummy_feature X_b = add_dummy_feature(X) # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y theta_best X_new = np.array([[0], [2]]) X_new_b = add_dummy_feature(X_new) # add x0 = 1 to each instance y_predict = X_new_b @ theta_best y_predict import matplotlib.pyplot as plt plt.figure(figsize=(6, 4)) # extra code – not needed, just formatting plt.plot(X_new, y_predict, "r-", label="Predictions") plt.plot(X, y, "b.") # extra code – beautifies and saves Figure 4–2 plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([0, 2, 0, 15]) plt.grid() plt.legend(loc="upper left") 
save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b) @ y ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_epochs = 1000 m = len(X_b) # number of instances np.random.seed(42) theta = np.random.randn(2, 1) # randomly initialized model parameters for epoch in range(n_epochs): gradients = 2 / m * X_b.T @ (X_b @ theta - y) theta = theta - eta * gradients ###Output _____no_output_____ ###Markdown The trained model parameters: ###Code theta # extra code – generates and saves Figure 4–8 import matplotlib as mpl def plot_gradient_descent(theta, eta): m = len(X_b) plt.plot(X, y, "b.") n_epochs = 1000 n_shown = 20 theta_path = [] for epoch in range(n_epochs): if epoch < n_shown: y_predict = X_new_b @ theta color = mpl.colors.rgb2hex(plt.cm.OrRd(epoch / n_shown + 0.15)) plt.plot(X_new, y_predict, linestyle="solid", color=color) gradients = 2 / m * X_b.T @ (X_b @ theta - y) theta = theta - eta * gradients theta_path.append(theta) plt.xlabel("$x_1$") plt.axis([0, 2, 0, 15]) plt.grid() plt.title(fr"$\eta = {eta}$") return theta_path np.random.seed(42) theta = np.random.randn(2, 1) # random initialization plt.figure(figsize=(10, 4)) plt.subplot(131) plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0) plt.subplot(132) theta_path_bgd = plot_gradient_descent(theta, eta=0.1) plt.gca().axes.yaxis.set_ticklabels([]) plt.subplot(133) plt.gca().axes.yaxis.set_ticklabels([]) plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output _____no_output_____ ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] # extra code – we need to store the path of theta in the # parameter space to plot the next figure n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) np.random.seed(42) theta = np.random.randn(2, 1) # random initialization n_shown = 20 # extra code – just needed to generate the figure below plt.figure(figsize=(6, 4)) # extra code – not needed, just formatting for epoch in range(n_epochs): for iteration in range(m): # extra code – these 4 lines are used to generate the figure if epoch == 0 and iteration < n_shown: y_predict = X_new_b @ theta color = mpl.colors.rgb2hex(plt.cm.OrRd(iteration / n_shown + 0.15)) plt.plot(X_new, y_predict, color=color) random_index = np.random.randint(m) xi = X_b[random_index : random_index + 1] yi = y[random_index : random_index + 1] gradients = 2 * xi.T @ (xi @ theta - yi) # for SGD, do not divide by m eta = learning_schedule(epoch * m + iteration) theta = theta - eta * gradients theta_path_sgd.append(theta) # extra code – to generate the figure # extra code – this section beautifies and saves Figure 4–10 plt.plot(X, y, "b.") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) 
plt.axis([0, 2, 0, 15]) plt.grid() save_fig("sgd_plot") plt.show() theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-5, penalty=None, eta0=0.01, n_iter_no_change=100, random_state=42) sgd_reg.fit(X, y.ravel()) # y.ravel() because fit() expects 1D targets sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent The code in this section is used to generate the next figure, it is not in the book. ###Code # extra code – this cell generates and saves Figure 4–11 from math import ceil n_epochs = 50 minibatch_size = 20 n_batches_per_epoch = ceil(m / minibatch_size) np.random.seed(42) theta = np.random.randn(2, 1) # random initialization t0, t1 = 200, 1000 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta_path_mgd = [] for epoch in range(n_epochs): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for iteration in range(0, n_batches_per_epoch): idx = iteration * minibatch_size xi = X_b_shuffled[idx : idx + minibatch_size] yi = y_shuffled[idx : idx + minibatch_size] gradients = 2 / minibatch_size * xi.T @ (xi @ theta - yi) eta = learning_schedule(iteration) theta = theta - eta * gradients theta_path_mgd.append(theta) theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7, 4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left") plt.xlabel(r"$\theta_0$") plt.ylabel(r"$\theta_1$ ", rotation=0) plt.axis([2.6, 4.6, 2.3, 3.4]) plt.grid() save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown Polynomial Regression ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1) # extra code – this cell generates and saves Figure 4–12 plt.figure(figsize=(6, 4)) plt.plot(X, y, "b.") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([-3, 3, 0, 10]) plt.grid() save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ # extra code – this cell generates and saves Figure 4–13 X_new = np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.figure(figsize=(6, 4)) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.legend(loc="upper left") plt.axis([-3, 3, 0, 10]) plt.grid() save_fig("quadratic_predictions_plot") plt.show() # extra code – this cell generates and saves Figure 4–14 from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline plt.figure(figsize=(6, 4)) for style, width, degree in (("r-+", 2, 1), ("b--", 2, 2), ("g-", 1, 300)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = make_pipeline(polybig_features, std_scaler, lin_reg) 
polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) label = f"{degree} degree{'s' if degree > 1 else ''}" plt.plot(X_new, y_newbig, style, label=label, linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([-3, 3, 0, 10]) plt.grid() save_fig("high_degree_polynomials_plot") plt.show() ###Output _____no_output_____ ###Markdown Learning Curves ###Code from sklearn.model_selection import learning_curve train_sizes, train_scores, valid_scores = learning_curve( LinearRegression(), X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5, scoring="neg_root_mean_squared_error") train_errors = -train_scores.mean(axis=1) valid_errors = -valid_scores.mean(axis=1) plt.figure(figsize=(6, 4)) # extra code – not need, just formatting plt.plot(train_sizes, train_errors, "r-+", linewidth=2, label="train") plt.plot(train_sizes, valid_errors, "b-", linewidth=3, label="valid") # extra code – beautifies and saves Figure 4–15 plt.xlabel("Training set size") plt.ylabel("RMSE") plt.grid() plt.legend(loc="upper right") plt.axis([0, 80, 0, 2.5]) save_fig("underfitting_learning_curves_plot") plt.show() from sklearn.pipeline import make_pipeline polynomial_regression = make_pipeline( PolynomialFeatures(degree=10, include_bias=False), LinearRegression()) train_sizes, train_scores, valid_scores = learning_curve( polynomial_regression, X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5, scoring="neg_root_mean_squared_error") # extra code – generates and saves Figure 4–16 train_errors = -train_scores.mean(axis=1) valid_errors = -valid_scores.mean(axis=1) plt.figure(figsize=(6, 4)) plt.plot(train_sizes, train_errors, "r-+", linewidth=2, label="train") plt.plot(train_sizes, valid_errors, "b-", linewidth=3, label="valid") plt.legend(loc="upper right") plt.xlabel("Training set size") plt.ylabel("RMSE") plt.grid() plt.axis([0, 80, 0, 2.5]) save_fig("learning_curves_plot") plt.show() ###Output _____no_output_____ ###Markdown Regularized Linear Models Ridge Regression Let's generate a very small and noisy linear dataset: ###Code # extra code – we've done this type of generation several times before np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) # extra code – a quick peek at the dataset we just generated plt.figure(figsize=(6, 4)) plt.plot(X, y, ".") plt.xlabel("$x_1$") plt.ylabel("$y$ ", rotation=0) plt.axis([0, 3, 0, 3.5]) plt.grid() plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=0.1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # extra code – this cell generates and saves Figure 4–17 def plot_model(model_class, polynomial, alphas, **model_kargs): plt.plot(X, y, "b.", linewidth=3) for alpha, style in zip(alphas, ("b:", "g--", "r-")): if alpha > 0: model = model_class(alpha, **model_kargs) else: model = LinearRegression() if polynomial: model = make_pipeline( PolynomialFeatures(degree=10, include_bias=False), StandardScaler(), model) model.fit(X, y) y_new_regul = model.predict(X_new) plt.plot(X_new, y_new_regul, style, linewidth=2, label=fr"$\alpha = {alpha}$") plt.legend(loc="upper left") plt.xlabel("$x_1$") plt.axis([0, 3, 0, 3.5]) plt.grid() plt.figure(figsize=(9, 3.5)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$ ", rotation=0) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), 
random_state=42) plt.gca().axes.yaxis.set_ticklabels([]) save_fig("ridge_regression_plot") plt.show() sgd_reg = SGDRegressor(penalty="l2", alpha=0.1 / m, tol=None, max_iter=1000, eta0=0.01, random_state=42) sgd_reg.fit(X, y.ravel()) # y.ravel() because fit() expects 1D targets sgd_reg.predict([[1.5]]) # extra code – show that we get roughly the same solution as earlier when # we use Stochastic Average GD (solver="sag") ridge_reg = Ridge(alpha=0.1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # extra code – shows the closed form solution of Ridge regression, # compare with the next Ridge model's learned parameters below alpha = 0.1 A = np.array([[0., 0.], [0., 1.]]) X_b = np.c_[np.ones(m), X] np.linalg.inv(X_b.T @ X_b + alpha * A) @ X_b.T @ y ridge_reg.intercept_, ridge_reg.coef_ # extra code ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) # extra code – this cell generates and saves Figure 4–18 plt.figure(figsize=(9, 3.5)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$ ", rotation=0) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 1e-2, 1), random_state=42) plt.gca().axes.yaxis.set_ticklabels([]) save_fig("lasso_regression_plot") plt.show() # extra code – this BIG cell generates and saves Figure 4–19 t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1 / len(Xr) * ((T @ Xr.T - yr.T) ** 2).sum(axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(J.argmin(), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core=1, eta=0.05, n_iterations=200): path = [theta] for iteration in range(n_iterations): gradients = (core * 2 / len(X) * X.T @ (X @ theta - y) + l1 * np.sign(theta) + l2 * theta) theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2.0, 0, "Lasso"), (1, N2, 0, 2.0, "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2 ** 2 tr_min_idx = np.unravel_index(JR.argmin(), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levels = np.exp(np.linspace(0, 1, 20)) - 1 levelsJ = levels * (J.max() - J.min()) + J.min() levelsJR = levels * (JR.max() - JR.min()) + JR.min() levelsN = np.linspace(0, N.max(), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(theta=np.array([[2.0], [0.5]]), X=Xr, y=yr, l1=np.sign(l1) / 3, l2=np.sign(l2), core=0) ax = axes[i, 0] ax.grid() ax.axhline(y=0, color="k") ax.axvline(x=0, color="k") ax.contourf(t1, t2, N / 2.0, levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(fr"$\ell_{i + 1}$ penalty") ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$") ax.set_ylabel(r"$\theta_2$", rotation=0) ax = axes[i, 1] ax.grid() ax.axhline(y=0, color="k") ax.axvline(x=0, color="k") ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") 
ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$") save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping Let's go back to the quadratic dataset we used earlier: ###Code from copy import deepcopy from sklearn.metrics import mean_squared_error from sklearn.preprocessing import StandardScaler # extra code – creates the same quadratic dataset as earlier and splits it np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1) X_train, y_train = X[: m // 2], y[: m // 2, 0] X_valid, y_valid = X[m // 2 :], y[m // 2 :, 0] preprocessing = make_pipeline(PolynomialFeatures(degree=90, include_bias=False), StandardScaler()) X_train_prep = preprocessing.fit_transform(X_train) X_valid_prep = preprocessing.transform(X_valid) sgd_reg = SGDRegressor(penalty=None, eta0=0.002, random_state=42) n_epochs = 500 best_valid_rmse = float('inf') train_errors, val_errors = [], [] # extra code – it's for the figure below for epoch in range(n_epochs): sgd_reg.partial_fit(X_train_prep, y_train) y_valid_predict = sgd_reg.predict(X_valid_prep) val_error = mean_squared_error(y_valid, y_valid_predict, squared=False) if val_error < best_valid_rmse: best_valid_rmse = val_error best_model = deepcopy(sgd_reg) # extra code – we evaluate the train error and save it for the figure y_train_predict = sgd_reg.predict(X_train_prep) train_error = mean_squared_error(y_train, y_train_predict, squared=False) val_errors.append(val_error) train_errors.append(train_error) # extra code – this section generates and saves Figure 4–20 best_epoch = np.argmin(val_errors) plt.figure(figsize=(6, 4)) plt.annotate('Best model', xy=(best_epoch, best_valid_rmse), xytext=(best_epoch, best_valid_rmse + 0.5), ha="center", arrowprops=dict(facecolor='black', shrink=0.05)) plt.plot([0, n_epochs], [best_valid_rmse, best_valid_rmse], "k:", linewidth=2) plt.plot(val_errors, "b-", linewidth=3, label="Validation set") plt.plot(best_epoch, best_valid_rmse, "bo") plt.plot(train_errors, "r--", linewidth=2, label="Training set") plt.legend(loc="upper right") plt.xlabel("Epoch") plt.ylabel("RMSE") plt.axis([0, n_epochs, 0, 3.5]) plt.grid() save_fig("early_stopping_plot") plt.show() ###Output _____no_output_____ ###Markdown Logistic Regression Estimating Probabilities ###Code # extra code – generates and saves Figure 4–21 lim = 6 t = np.linspace(-lim, lim, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(8, 3)) plt.plot([-lim, lim], [0, 0], "k-") plt.plot([-lim, lim], [0.5, 0.5], "k:") plt.plot([-lim, lim], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \dfrac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left") plt.axis([-lim, lim, -0.1, 1.1]) plt.gca().set_yticks([0, 0.25, 0.5, 0.75, 1]) plt.grid() save_fig("logistic_function_plot") plt.show() ###Output _____no_output_____ ###Markdown Decision Boundaries ###Code from sklearn.datasets import load_iris iris = load_iris(as_frame=True) list(iris) print(iris.DESCR) # extra code – it's a bit too long iris.data.head(3) iris.target.head(3) # note that the instances are not shuffled 
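# (Added illustration, not in the original notebook) A quick look at the class
# balance of the frame-based iris object loaded above: each of the three
# classes should contain 50 instances.
iris.target.value_counts()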
iris.target_names from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split X = iris.data[["petal width (cm)"]].values y = iris.target_names[iris.target] == 'virginica' X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) log_reg = LogisticRegression(random_state=42) log_reg.fit(X_train, y_train) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # reshape to get a column vector y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0, 0] plt.figure(figsize=(8, 3)) # extra code – not needed, just formatting plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica proba") plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica proba") plt.plot([decision_boundary, decision_boundary], [0, 1], "k:", linewidth=2, label="Decision boundary") # extra code – this section beautifies and saves Figure 4–21 plt.arrow(x=decision_boundary, y=0.08, dx=-0.3, dy=0, head_width=0.05, head_length=0.1, fc="b", ec="b") plt.arrow(x=decision_boundary, y=0.92, dx=0.3, dy=0, head_width=0.05, head_length=0.1, fc="g", ec="g") plt.plot(X_train[y_train == 0], y_train[y_train == 0], "bs") plt.plot(X_train[y_train == 1], y_train[y_train == 1], "g^") plt.xlabel("Petal width (cm)") plt.ylabel("Probability") plt.legend(loc="center left") plt.axis([0, 3, -0.02, 1.02]) plt.grid() save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) # extra code – this cell generates and saves Figure 4–22 X = iris.data[["petal length (cm)", "petal width (cm)"]].values y = iris.target_names[iris.target] == 'virginica' X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) log_reg = LogisticRegression(C=2, random_state=42) log_reg.fit(X_train, y_train) # for the contour plot x0, x1 = np.meshgrid(np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1)) X_new = np.c_[x0.ravel(), x1.ravel()] # one instance per point on the figure y_proba = log_reg.predict_proba(X_new) zz = y_proba[:, 1].reshape(x0.shape) # for the decision boundary left_right = np.array([2.9, 7]) boundary = -((log_reg.coef_[0, 0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0, 1]) plt.figure(figsize=(10, 4)) plt.plot(X_train[y_train == 0, 0], X_train[y_train == 0, 1], "bs") plt.plot(X_train[y_train == 1, 0], X_train[y_train == 1, 1], "g^") contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) plt.clabel(contour, inline=1) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.27, "Not Iris virginica", color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", color="g", ha="center") plt.xlabel("Petal length") plt.ylabel("Petal width") plt.axis([2.9, 7, 0.8, 2.7]) plt.grid() save_fig("logistic_regression_contour_plot") plt.show() ###Output _____no_output_____ ###Markdown Softmax Regression ###Code X = iris.data[["petal length (cm)", "petal width (cm)"]].values y = iris["target"] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) softmax_reg = LogisticRegression(C=30, random_state=42) softmax_reg.fit(X_train, y_train) softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]).round(2) # extra code – this cell generates and saves Figure 4–23 from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(["#fafab0", "#9898ff", "#a0faa0"]) x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1)) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = 
softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y == 2, 0], X[y == 2, 1], "g^", label="Iris virginica") plt.plot(X[y == 1, 0], X[y == 1, 1], "bs", label="Iris versicolor") plt.plot(X[y == 0, 0], X[y == 0, 1], "yo", label="Iris setosa") plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap="hot") plt.clabel(contour, inline=1) plt.xlabel("Petal length") plt.ylabel("Petal width") plt.legend(loc="center left") plt.axis([0.5, 7, 0, 3.5]) plt.grid() save_fig("softmax_regression_contour_plot") plt.show() ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. 1. If you have a training set with millions of features you can use Stochastic Gradient Descent or Mini-batch Gradient Descent, and perhaps Batch Gradient Descent if the training set fits in memory. But you cannot use the Normal Equation or the SVD approach because the computational complexity grows quickly (more than quadratically) with the number of features.2. If the features in your training set have very different scales, the cost function will have the shape of an elongated bowl, so the Gradient Descent algorithms will take a long time to converge. To solve this you should scale the data before training the model. Note that the Normal Equation or SVD approach will work just fine without scaling. Moreover, regularized models may converge to a suboptimal solution if the features are not scaled: since regularization penalizes large weights, features with smaller values will tend to be ignored compared to features with larger values.3. Gradient Descent cannot get stuck in a local minimum when training a Logistic Regression model because the cost function is convex. _Convex_ means that if you draw a straight line between any two points on the curve, the line never crosses the curve.4. If the optimization problem is convex (such as Linear Regression or Logistic Regression), and assuming the learning rate is not too high, then all Gradient Descent algorithms will approach the global optimum and end up producing fairly similar models. However, unless you gradually reduce the learning rate, Stochastic GD and Mini-batch GD will never truly converge; instead, they will keep jumping back and forth around the global optimum. This means that even if you let them run for a very long time, these Gradient Descent algorithms will produce slightly different models.5. If the validation error consistently goes up after every epoch, then one possibility is that the learning rate is too high and the algorithm is diverging. If the training error also goes up, then this is clearly the problem and you should reduce the learning rate. However, if the training error is not going up, then your model is overfitting the training set and you should stop training.6. Due to their random nature, neither Stochastic Gradient Descent nor Mini-batch Gradient Descent is guaranteed to make progress at every single training iteration. So if you immediately stop training when the validation error goes up, you may stop much too early, before the optimum is reached. A better option is to save the model at regular intervals; then, when it has not improved for a long time (meaning it will probably never beat the record), you can revert to the best saved model.7. 
Stochastic Gradient Descent has the fastest training iteration since it considers only one training instance at a time, so it is generally the first to reach the vicinity of the global optimum (or Mini-batch GD with a very small mini-batch size). However, only Batch Gradient Descent will actually converge, given enough training time. As mentioned, Stochastic GD and Mini-batch GD will bounce around the optimum, unless you gradually reduce the learning rate.8. If the validation error is much higher than the training error, this is likely because your model is overfitting the training set. One way to try to fix this is to reduce the polynomial degree: a model with fewer degrees of freedom is less likely to overfit. Another thing you can try is to regularize the model—for example, by adding an ℓ₂ penalty (Ridge) or an ℓ₁ penalty (Lasso) to the cost function. This will also reduce the degrees of freedom of the model. Lastly, you can try to increase the size of the training set.9. If both the training error and the validation error are almost equal and fairly high, the model is likely underfitting the training set, which means it has a high bias. You should try reducing the regularization hyperparameter _α_.10. Let's see: * A model with some regularization typically performs better than a model without any regularization, so you should generally prefer Ridge Regression over plain Linear Regression. * Lasso Regression uses an ℓ₁ penalty, which tends to push the weights down to exactly zero. This leads to sparse models, where all weights are zero except for the most important weights. This is a way to perform feature selection automatically, which is good if you suspect that only a few features actually matter. When you are not sure, you should prefer Ridge Regression. * Elastic Net is generally preferred over Lasso since Lasso may behave erratically in some cases (when several features are strongly correlated or when there are more features than training instances). However, it does add an extra hyperparameter to tune. If you want Lasso without the erratic behavior, you can just use Elastic Net with an `l1_ratio` close to 1.11. If you want to classify pictures as outdoor/indoor and daytime/nighttime, since these are not exclusive classes (i.e., all four combinations are possible) you should train two Logistic Regression classifiers. 12. Batch Gradient Descent with early stopping for Softmax RegressionExercise: _Implement Batch Gradient Descent with early stopping for Softmax Regression without using Scikit-Learn, only NumPy. Use it on a classification task such as the iris dataset._ Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris.data[["petal length (cm)", "petal width (cm)"]].values y = iris["target"].values ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$). The easiest option to do this would be to use Scikit-Learn's `add_dummy_feature()` function, but the point of this exercise is to get a better understanding of the algorithms by implementing them manually. 
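(Purely for comparison — a one-line sketch of the Scikit-Learn helper mentioned above; the variable name `X_with_bias_sklearn` is just illustrative, and the manual version used in this exercise follows in the next cell.) ###Code
from sklearn.preprocessing import add_dummy_feature

X_with_bias_sklearn = add_dummy_feature(X)  # prepends a column of 1s (x0 = 1)
###Output
_____no_output_____
###Markdown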
So here is one possible implementation: ###Code
X_with_bias = np.c_[np.ones(len(X)), X]
###Output
_____no_output_____
###Markdown
The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but again, we want to do this manually: ###Code
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias)

test_size = int(total_size * test_ratio)
validation_size = int(total_size * validation_ratio)
train_size = total_size - test_size - validation_size

np.random.seed(42)
rnd_indices = np.random.permutation(total_size)

X_train = X_with_bias[rnd_indices[:train_size]]
y_train = y[rnd_indices[:train_size]]
X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
y_valid = y[rnd_indices[train_size:-test_size]]
X_test = X_with_bias[rnd_indices[-test_size:]]
y_test = y[rnd_indices[-test_size:]]
###Output
_____no_output_____
###Markdown
The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance. To understand this code, you need to know that `np.diag(np.ones(n))` creates an n×n matrix full of 0s except for 1s on the main diagonal. Moreover, if `a` is a NumPy array, then `a[[1, 3, 2]]` returns an array with 3 rows equal to `a[1]`, `a[3]` and `a[2]` (this is [advanced NumPy indexing](https://numpy.org/doc/stable/reference/arrays.indexing.html#advanced-indexing)). ###Code
def to_one_hot(y):
    return np.diag(np.ones(y.max() + 1))[y]
###Output
_____no_output_____
###Markdown
Let's test this function on the first 10 instances: ###Code
y_train[:10]
to_one_hot(y_train[:10])
###Output
_____no_output_____
###Markdown
Looks good, so let's create the target class probabilities matrix for the training set, the validation set, and the test set: ###Code
Y_train_one_hot = to_one_hot(y_train)
Y_valid_one_hot = to_one_hot(y_valid)
Y_test_one_hot = to_one_hot(y_test)
###Output
_____no_output_____
###Markdown
Now let's scale the inputs. We compute the mean and standard deviation of each feature on the training set (except for the bias feature), then we center and scale each feature in the training set, the validation set, and the test set: ###Code
mean = X_train[:, 1:].mean(axis=0)
std = X_train[:, 1:].std(axis=0)
X_train[:, 1:] = (X_train[:, 1:] - mean) / std
X_valid[:, 1:] = (X_valid[:, 1:] - mean) / std
X_test[:, 1:] = (X_test[:, 1:] - mean) / std
###Output
_____no_output_____
###Markdown
Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code
def softmax(logits):
    exps = np.exp(logits)
    exp_sums = exps.sum(axis=1, keepdims=True)
    return exps / exp_sums
###Output
_____no_output_____
###Markdown
We are almost ready to start training. Let's define the number of inputs and outputs: ###Code
n_inputs = X_train.shape[1]  # == 3 (2 features plus the bias term)
n_outputs = len(np.unique(y_train))  # == 3 (there are 3 iris classes)
###Output
_____no_output_____
###Markdown
Now here comes the hardest part: training!
Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.5 n_epochs = 5001 m = len(X_train) epsilon = 1e-5 np.random.seed(42) Theta = np.random.randn(n_inputs, n_outputs) for epoch in range(n_epochs): logits = X_train @ Theta Y_proba = softmax(logits) if epoch % 1000 == 0: Y_proba_valid = softmax(X_valid @ Theta) xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon)) print(epoch, xentropy_losses.sum(axis=1).mean()) error = Y_proba - Y_train_one_hot gradients = 1 / m * X_train.T @ error Theta = Theta - eta * gradients ###Output 0 3.7085808486476917 1000 0.14519367480830644 2000 0.1301309575504088 3000 0.12009639326384539 4000 0.11372961364786884 5000 0.11002459532472425 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) accuracy_score = (y_predict == y_valid).mean() accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty ok. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
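For reference (an addition spelled out here, consistent with the equations above and with the code in the next cell), the penalized cost is

$J_{\text{reg}}(\mathbf{\Theta}) = J(\mathbf{\Theta}) + \alpha \, \dfrac{1}{2}\sum\limits_{k=1}^{K}\sum\limits_{j \geq 1}{\left(\theta_{j,k}\right)^2}$

where the inner sum skips the bias row ($j = 0$), and each gradient vector picks up a corresponding term:

$\nabla_{\mathbf{\theta}^{(k)}} \, J_{\text{reg}}(\mathbf{\Theta}) = \nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) + \alpha \, \mathbf{\theta}^{(k)}$ (with the bias component zeroed out)

which is exactly what the `np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]` term in the code below implements.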
###Code
eta = 0.5
n_epochs = 5001
m = len(X_train)
epsilon = 1e-5
alpha = 0.01  # regularization hyperparameter

np.random.seed(42)
Theta = np.random.randn(n_inputs, n_outputs)

for epoch in range(n_epochs):
    logits = X_train @ Theta
    Y_proba = softmax(logits)
    if epoch % 1000 == 0:
        Y_proba_valid = softmax(X_valid @ Theta)
        xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))
        l2_loss = 1 / 2 * (Theta[1:] ** 2).sum()
        total_loss = xentropy_losses.sum(axis=1).mean() + alpha * l2_loss
        print(epoch, total_loss.round(4))
    error = Y_proba - Y_train_one_hot
    gradients = 1 / m * X_train.T @ error
    gradients += np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients
###Output
0 3.7372
1000 0.3259
2000 0.3259
3000 0.3259
4000 0.3259
5000 0.3259
###Markdown
Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code
logits = X_valid @ Theta
Y_proba = softmax(logits)
y_predict = Y_proba.argmax(axis=1)
accuracy_score = (y_predict == y_valid).mean()
accuracy_score
###Output
_____no_output_____
###Markdown
In this case, the $\ell_2$ penalty did not change the validation accuracy. Perhaps try fine-tuning `alpha`? Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code
eta = 0.5
n_epochs = 50_001
m = len(X_train)
epsilon = 1e-5
C = 100  # regularization hyperparameter
best_loss = np.infty

np.random.seed(42)
Theta = np.random.randn(n_inputs, n_outputs)

for epoch in range(n_epochs):
    logits = X_train @ Theta
    Y_proba = softmax(logits)
    Y_proba_valid = softmax(X_valid @ Theta)
    xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))
    l2_loss = 1 / 2 * (Theta[1:] ** 2).sum()
    total_loss = xentropy_losses.sum(axis=1).mean() + 1 / C * l2_loss
    if epoch % 1000 == 0:
        print(epoch, total_loss.round(4))
    if total_loss < best_loss:
        best_loss = total_loss
    else:
        print(epoch - 1, best_loss.round(4))
        print(epoch, total_loss.round(4), "early stopping!")
        break
    error = Y_proba - Y_train_one_hot
    gradients = 1 / m * X_train.T @ error
    gradients += np.r_[np.zeros([1, n_outputs]), 1 / C * Theta[1:]]
    Theta = Theta - eta * gradients

logits = X_valid @ Theta
Y_proba = softmax(logits)
y_predict = Y_proba.argmax(axis=1)
accuracy_score = (y_predict == y_valid).mean()
accuracy_score
###Output
_____no_output_____
###Markdown
Oh well, still no change in validation accuracy, but at least early stopping shortened training a bit.
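Before plotting, a quick sanity check (an added sketch, not in the original notebook): classify a flower with 5 cm long, 2 cm wide petals using the manually trained `Theta`, remembering to scale the inputs with the training-set statistics and to add the bias term: ###Code
X_example = np.array([[5.0, 2.0]])                     # petal length, petal width (cm)
X_example_scaled = (X_example - mean) / std            # same scaling as the training set
X_example_with_bias = np.c_[np.ones(len(X_example)), X_example_scaled]
Y_proba_example = softmax(X_example_with_bias @ Theta)
# Class 2 (Iris virginica) is expected, matching the scikit-learn Softmax model earlier.
Y_proba_example.round(2), Y_proba_example.argmax(axis=1)
###Output
_____no_output_____
###Markdown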
Now let's plot the model's predictions on the whole dataset (remember to scale all features fed to the model): ###Code custom_cmap = mpl.colors.ListedColormap(['#fafab0', '#9898ff', '#a0faa0']) x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1)) X_new = np.c_[x0.ravel(), x1.ravel()] X_new = (X_new - mean) / std X_new_with_bias = np.c_[np.ones(len(X_new)), X_new] logits = X_new_with_bias @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y == 2, 0], X[y == 2, 1], "g^", label="Iris virginica") plt.plot(X[y == 1, 0], X[y == 1, 1], "bs", label="Iris versicolor") plt.plot(X[y == 0, 0], X[y == 0, 1], "yo", label="Iris setosa") plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap="hot") plt.clabel(contour, inline=1) plt.xlabel("Petal length") plt.ylabel("Petal width") plt.legend(loc="upper left") plt.axis([0, 7, 0, 3.5]) plt.grid() plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) accuracy_score = (y_predict == y_test).mean() accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
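As the paragraph above suggests, it is worth double-checking the shape of each term against the equations before trusting the numbers. Here is a minimal sanity-check sketch (not in the original solution) using the arrays just defined; the assertions simply restate the expected dimensions: ###Code
logits = X_train.dot(Theta)             # (m, n_outputs): one score per instance and per class
Y_proba = softmax(logits)               # (m, n_outputs): each row is a probability distribution
error = Y_proba - Y_train_one_hot       # (m, n_outputs)
gradients = 1/m * X_train.T.dot(error)  # (n_inputs, n_outputs): same shape as Theta

assert logits.shape == (m, n_outputs)
assert np.allclose(Y_proba.sum(axis=1), 1.0)
assert gradients.shape == Theta.shape == (n_inputs, n_outputs)
###Output _____no_output_____ ###Markdown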
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
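For reference (this note is not in the original solution), the extra term added to the gradients in the two training cells above comes directly from differentiating the $\ell_2$ penalty: the regularized cost is $J_{reg}(\mathbf{\Theta}) = J(\mathbf{\Theta}) + \dfrac{\alpha}{2}\sum\limits_{j \geq 1}{\sum\limits_{k=1}^{K}{\left(\theta_j^{(k)}\right)^2}}$ (the sum skips $j = 0$, the bias row), so $\dfrac{\partial J_{reg}}{\partial \theta_j^{(k)}} = \dfrac{\partial J}{\partial \theta_j^{(k)}} + \alpha \, \theta_j^{(k)}$ for every $j \geq 1$, and there is no extra term for the bias row. That is exactly what the line `gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]` implements.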
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
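To make the SVD connection concrete, here is a small sketch (not from the book) that rebuilds the pseudoinverse of `X_b` from its SVD and checks that it matches `np.linalg.pinv()`; singular values below an arbitrary illustrative threshold are zeroed out, which is what lets this approach cope with a non-invertible $\mathbf{X}^T \mathbf{X}$: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]           # invert the singular values, zeroing out the tiny ones
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)  # Moore-Penrose pseudoinverse built from the SVD
np.allclose(X_b_pinv, np.linalg.pinv(X_b)), X_b_pinv.dot(y)  # True, and essentially the same theta as theta_best
###Output _____no_output_____ ###Markdown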
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
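Written out explicitly, and mirroring what the code below computes (the bias row is excluded from the penalty), the regularized cost is $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{j \geq 1}\sum\limits_{k=1}^{K}{\theta_{j,k}^2}$, and each gradient vector $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta})$ simply gains the extra term $\alpha \, \mathbf{\theta}^{(k)}$, with its bias component zeroed out.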
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
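A common refinement of this stopping rule (not used in this exercise) is to tolerate a few non-improving iterations before giving up, in case the validation loss merely oscillates. A minimal sketch of such a "patience" check, assuming the validation losses are collected in a list: ###Code
def should_stop(val_losses, patience=10):
    """Return True once the best validation loss is more than `patience` iterations old."""
    best_iteration = int(np.argmin(val_losses))
    return len(val_losses) - 1 - best_iteration > patience
###Output _____no_output_____ ###Markdown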
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercices in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import numpy.random as rnd import os # to make this notebook's output stable across runs rnd.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) rnd.seed(42) theta = rnd.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() rnd.seed(42) theta = rnd.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) rnd.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book 
y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 rnd.seed(42) theta = rnd.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = rnd.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd rnd.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline(( ("poly_features", polybig_features), ("std_scaler", 
std_scaler), ("lin_reg", lin_reg), )) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("sgd_reg", LinearRegression()), )) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge rnd.seed(42) m = 20 X = 3 * rnd.rand(m, 1) y = 1 + 0.5 * X + rnd.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), )) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1)) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1) 
save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline(( ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), )) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * 
N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression() log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary 
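# An aside, not part of the book's code: for this single-feature model the same boundary can be
# read directly from the fitted parameters, since the estimated probability is 0.5 exactly when
# theta_0 + theta_1 * x = 0:
-log_reg.intercept_[0] / log_reg.coef_[0][0]  # close to 1.6 cm, essentially decision_boundary above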
log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
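For reference, that library route would look roughly like the sketch below (the `_alt` names are only there to avoid clobbering the variables used in the rest of the exercise): ###Code
from sklearn.model_selection import train_test_split

# A hypothetical equivalent of the manual split that follows: 60% train, 20% validation, 20% test.
X_train_alt, X_test_alt, y_train_alt, y_test_alt = train_test_split(
    X_with_bias, y, test_size=0.2, random_state=42)
X_train_alt, X_valid_alt, y_train_alt, y_valid_alt = train_test_split(
    X_train_alt, y_train_alt, test_size=0.25, random_state=42)  # 0.25 of the remaining 80% = 20%
###Output _____no_output_____ ###Markdown In practice you would probably also pass `stratify=y` so that the class proportions stay balanced across the three splits.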
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out.
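For instance, a quick look at the shapes of the arrays defined above (a small sanity-check sketch) already catches most indexing mistakes: ###Code
# The shapes we expect before writing the training loop:
print(X_train.shape)          # (m, n_inputs), i.e. (90, 3)
print(Y_train_one_hot.shape)  # (m, n_outputs), i.e. (90, 3)
print(n_inputs, n_outputs)    # 3 3
###Output _____no_output_____ ###Markdown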
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! 
We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary. ###Code ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. 
We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
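The pseudoinverse itself is obtained from the Singular Value Decomposition of $\mathbf{X}$: singular values below a small threshold are dropped rather than inverted, which keeps the computation well behaved even when $\mathbf{X}^T\mathbf{X}$ is not invertible. A hand-rolled sketch (the threshold here is just an illustrative choice): ###Code
# Moore-Penrose pseudoinverse via SVD, applied to the training matrix X_b defined above:
U_svd, s_svd, Vt_svd = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s_svd > 1e-10, 1 / s_svd, 0.)   # invert only the non-negligible singular values
X_b_pinv = Vt_svd.T.dot(np.diag(s_inv)).dot(U_svd.T)
X_b_pinv.dot(y)                                  # same parameter vector as the cells above
###Output _____no_output_____ ###Markdown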
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") 
plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") 
plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
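Before looking at the parameters, a quick sanity check on the training set (reusing `X_train`, `y_train` and the freshly trained `Theta` from the cells above): every row of the Softmax output should sum to 1, and the training accuracy should already be reasonably high. ###Code
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
print(np.allclose(Y_proba.sum(axis=1), 1))             # each row of the Softmax output should sum to 1
print(np.mean(np.argmax(Y_proba, axis=1) == y_train))  # accuracy on the training set
###Output _____no_output_____ ###Markdown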
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
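As a quick cross-check, Scikit-Learn's own Softmax Regression trained on the same split should reach a similar validation accuracy. A small sketch (note that `X_train` and `X_valid` contain the bias column we added manually, so we drop it here because `LogisticRegression` fits its own intercept): ###Code
from sklearn.linear_model import LogisticRegression
softmax_clf = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)
softmax_clf.fit(X_train[:, 1:], y_train)    # drop the manually added bias column
softmax_clf.score(X_valid[:, 1:], y_valid)  # validation accuracy, to compare with ours
###Output _____no_output_____ ###Markdown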
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) N_JOBS = 3 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
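The pseudoinverse itself is computed from the Singular Value Decomposition of $\mathbf{X}$: $\mathbf{X}^{+} = \mathbf{V}\mathbf{\Sigma}^{+}\mathbf{U}^T$, where $\mathbf{\Sigma}^{+}$ is obtained by inverting the non-negligible singular values and leaving the rest at zero. A small sketch of that computation: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.)          # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)    # X^+ = V Sigma^+ U^T
X_b_pinv.dot(y)                                 # should match theta_best computed above
###Output _____no_output_____ ###Markdown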
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
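# the mini-batch and full-batch paths are plotted next so all three trajectories can be compared on the same axes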
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
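Ridge Regression also has a closed-form solution, $\hat{\mathbf{\theta}} = \left(\mathbf{X}^T\mathbf{X} + \alpha\mathbf{A}\right)^{-1}\mathbf{X}^T\mathbf{y}$, where $\mathbf{A}$ is the identity matrix except for a 0 in the top-left cell (so the bias term is not regularized). A small sketch, whose prediction should closely match the `Ridge` predictions above: ###Code
alpha = 1
X_b_ridge = np.c_[np.ones((m, 1)), X]          # add x0 = 1 to each instance
A = np.identity(X_b_ridge.shape[1])
A[0, 0] = 0                                    # don't regularize the bias term
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + alpha * A).dot(X_b_ridge.T).dot(y)
np.array([[1, 1.5]]).dot(theta_ridge)          # compare with ridge_reg.predict([[1.5]])
###Output _____no_output_____ ###Markdown The SGD-based equivalent simply adds an $\ell_2$ penalty to the cost function at each step, using the settings described in the note above: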
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
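Training this model means finding the $\mathbf{\theta}$ that minimizes the log loss over the whole training set: $J(\mathbf{\theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}{\left[ y^{(i)} \log\left(\hat{p}^{(i)}\right) + \left(1 - y^{(i)}\right) \log\left(1 - \hat{p}^{(i)}\right) \right]}$ There is no known closed-form solution, but this cost function is convex, so solvers such as `lbfgs` reliably find the global minimum: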
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() np.c_[np.array([1,2,3]),np.array([4,5,6])] ar = np.array([1,2,3]) ar[1] X np.ones((2,3)) y X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) X_b.T np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict X_new X_new_b plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code a = [residuals,rank,s] a np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta = np.random.randn(2,1) 2/100 * X_b.T.dot(X_b.dot(theta) - y) theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() theta_path_bgd ###Output _____no_output_____ ###Markdown Stochastic Gradient Descent ###Code for a in range(5): print(a) X_b[18:19] theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import 
SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ y_new X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = 
train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, 
y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, 
np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) type(iris) iris.data print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() X_new[y_proba[:,1] >= 0.5] decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), 
x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrices for the training set, the validation set, and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out.
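For example, a quick throwaway check along those lines (purely illustrative; the `_check` names are ad hoc and none of this is needed for the final code) could print the shapes like so: ###Code
# Shape sanity check: every term of the gradient computation should have the
# shape you expect. A zero Theta is enough, since only the shapes matter here.
Theta_check = np.zeros((n_inputs, n_outputs))
logits_check = X_train.dot(Theta_check)                         # (m, 3)
Y_proba_check = softmax(logits_check)                           # (m, 3), each row sums to 1
error_check = Y_proba_check - Y_train_one_hot                   # (m, 3)
gradients_check = 1/len(X_train) * X_train.T.dot(error_check)   # (3, 3), same shape as Theta
for name, arr in (("Theta", Theta_check), ("logits", logits_check),
                  ("Y_proba", Y_proba_check), ("error", error_check),
                  ("gradients", gradients_check)):
    print(name, arr.shape)
###Output _____no_output_____ ###Markdown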
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
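Under the hood, this goes through the Singular Value Decomposition of the matrix; a minimal sketch (assuming the `X_b` and `y` defined above, with an arbitrary illustrative tolerance) might look like this: ###Code
# Rough sketch of a pseudoinverse computed via the SVD. Singular values below
# the tolerance are treated as zero instead of being inverted.
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)   # should be close to theta_best
###Output _____no_output_____ ###Markdown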
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), 
]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
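One aside before moving on (a sketch, not from the book's code): the Ridge fit above can be checked against the penalized Normal Equation. The `_demo` names are ad hoc, and the identity matrix has its first diagonal entry zeroed so that the bias term is not regularized: ###Code
# Closed-form Ridge solution: theta = (X_b'X_b + alpha*A)^(-1) X_b' y,
# with A the identity except for a zero in the bias position.
alpha_demo = 1
A_demo = np.eye(2)
A_demo[0, 0] = 0
X_b_demo = np.c_[np.ones((len(X), 1)), X]
theta_ridge_demo = np.linalg.inv(X_b_demo.T.dot(X_b_demo) + alpha_demo * A_demo).dot(X_b_demo.T).dot(y)
theta_ridge_demo[0] + theta_ridge_demo[1] * 1.5   # should be close to ridge_reg.predict([[1.5]])
###Output _____no_output_____ ###Markdown With that aside done, here is the SGD-based version with an `l2` penalty, using the settings from the note above: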
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
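(A related future-proofing note, as an aside: the `np.int` alias used in the cast above has been deprecated in recent NumPy releases and later removed, so on newer versions the plain built-in `int` is the safer spelling; it produces exactly the same array.) ###Code
# Equivalent cast without the deprecated np.int alias (illustrative; y is unchanged).
y = (iris["target"] == 2).astype(int)  # 1 if Iris virginica, else 0
###Output _____no_output_____ ###Markdown Now let's fit a Logistic Regression model on this feature: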
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. 
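One practical aside before that (not needed for this small dataset, just a common trick): `np.exp` can overflow for large logits, and a shifted variant is numerically safer. A sketch: ###Code
# Numerically stabler softmax (illustrative). Subtracting the per-row maximum
# before exponentiating avoids overflow and does not change the result, since
# the constant factor cancels in the ratio.
def softmax_stable(logits):
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown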
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
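For reference, spelling out what the code below computes: the penalized cost is $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{k=1}^{K}\sum\limits_{j \geq 1}{\left(\theta_j^{(k)}\right)^2}$, and each class gradient becomes $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}} + \alpha \, \mathbf{\theta}^{(k)}$, with the bias component of the extra term set to zero, which is why the code prepends a row of zeros with `np.r_`.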
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
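As an optional refinement (a sketch, not part of the original code): the loop above stops as soon as the validation loss worsens and keeps the last `Theta`, which is fine here, but one could also hold on to the parameters that achieved the lowest validation loss. The `_es`/`_best` names below are ad hoc: ###Code
# Illustrative variant of the early-stopping loop that remembers the best parameters.
best_loss = np.infty
Theta_es = np.random.randn(n_inputs, n_outputs)
Theta_best = Theta_es.copy()
for it in range(n_iterations):
    error = softmax(X_train.dot(Theta_es)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_es[1:]]
    Theta_es = Theta_es - eta * gradients
    Y_proba_val = softmax(X_valid.dot(Theta_es))
    val_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_val + epsilon), axis=1))
                + alpha * 1/2 * np.sum(np.square(Theta_es[1:])))
    if val_loss < best_loss:
        best_loss = val_loss
        Theta_best = Theta_es.copy()   # keep a copy of the best parameters so far
    else:
        break
###Output _____no_output_____ ###Markdown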
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
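Equivalently, when $\mathbf{X}$ has full column rank the pseudoinverse reduces to $\left(\mathbf{X}^T \mathbf{X}\right)^{-1}\mathbf{X}^T$, i.e. exactly the matrix used in the Normal Equation above; a one-line check (illustrative): ###Code
# Sanity check: for this full-column-rank X_b, pinv coincides with (X'X)^(-1) X'.
np.allclose(np.linalg.pinv(X_b), np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T))
###Output _____no_output_____ ###Markdown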
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
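Before inspecting the parameters, one convenience worth noting: the evaluation cells below all repeat the same three lines (compute the logits, apply `softmax()`, take the argmax). A tiny helper such as the hypothetical `predict_classes()` sketched here can bundle them; the cells that follow keep the book's inline style. ###Code
# Convenience sketch (hypothetical helper, not used by the cells below)
def predict_classes(X_batch, Theta):
    logits = X_batch.dot(Theta)
    Y_proba = softmax(logits)
    return np.argmax(Y_proba, axis=1)

predict_classes(X_valid, Theta)[:5]   # first few validation-set predictions
###Output _____no_output_____ ###Markdown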
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
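One small caveat before plotting: the loop above breaks at the first iteration where the validation loss rises, so the final `Theta` is one gradient step past the best parameters it saw (harmless here, since the validation accuracy is still perfect). A keep-the-best variant, in the same spirit as the `deepcopy(sgd_reg)` trick used earlier in this notebook, is sketched below; the names `Theta_kb` and `best_kept_Theta` are illustrative, not part of the exercise. ###Code
# Sketch (not the book's solution): train again, but keep a copy of the
# parameters with the lowest validation loss instead of stopping one step late.
best_kept_loss = np.infty
best_kept_Theta = None
Theta_kb = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
    Y_proba_kb = softmax(X_train.dot(Theta_kb))
    gradients_kb = (1/m * X_train.T.dot(Y_proba_kb - Y_train_one_hot)
                    + np.r_[np.zeros([1, n_outputs]), alpha * Theta_kb[1:]])
    Theta_kb = Theta_kb - eta * gradients_kb
    Y_valid_proba_kb = softmax(X_valid.dot(Theta_kb))
    val_loss_kb = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_valid_proba_kb + epsilon), axis=1))
                   + alpha * 1/2 * np.sum(np.square(Theta_kb[1:])))
    if val_loss_kb < best_kept_loss:
        best_kept_loss = val_loss_kb
        best_kept_Theta = Theta_kb.copy()
# best_kept_Theta now holds the parameters with the lowest validation loss;
# the cells below keep using the book's early-stopped Theta.
###Output _____no_output_____ ###Markdown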
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Huấn luyện Mô hình Tuyến tính** _Notebook này chứa toàn bộ mã nguồn mẫu và lời giải bài tập Chương 4 - tập 1._ Cài đặt Đầu tiên hãy nhập một vài mô-đun thông dụng, đảm bảo rằng Matplotlib sẽ vẽ đồ thị ngay trong notebook, và chuẩn bị một hàm để lưu đồ thị. Ta cũng kiểm tra xem Python phiên bản từ 3.5 trở lên đã được cài đặt hay chưa (mặc dù Python 2.x vẫn có thể hoạt động, phiên bản này đã bị deprecated nên chúng tôi rất khuyến khích việc sử dụng Python 3), cũng như Scikit-Learn ≥ 0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Hồi quy Tuyến tính sử dụng Phương Trình Pháp Tuyến ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown Hình minh họa trong cuốn sách tương ứng với đoạn mã sau, với chú thích và tên trục: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Lớp `LinearRegression` dựa trên hàm `scipy.linalg.lstsq()`(viết tắt cho "bình phương nhỏ nhất" - "least squares"), có thể được gọi trực tiếp như sau: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown Hàm này tính $\mathbf{X}^+\mathbf{y}$, trong đó $\mathbf{X}^{+}$ là _giả nghịch đảo_ của $\mathbf{X}$ (cụ thể là nghịch đảo Moore-Penrose). 
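The pseudoinverse itself comes from the Singular Value Decomposition of $\mathbf{X}$: keeping only the non-negligible singular values, $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$. The sketch below (not part of the original notebook; the 1e-10 cutoff is an arbitrary choice) computes it manually and should agree with the `np.linalg.pinv()` call that follows. ###Code
# Sketch: the Moore-Penrose pseudoinverse computed manually from the SVD.
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]     # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)                          # same result as np.linalg.pinv(X_b).dot(y)
###Output _____no_output_____ ###Markdown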
Bạn có thể sử dụng `np.linalg.pinv()` để trực tiếp tính giả nghịch đảo: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Hồi quy Tuyến tính sử dụng hạ gradient theo batch ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Hạ Gradient Ngẫu nhiên**Stochastic Gradient Descent** ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Hạ gradient theo Mini-batch**Mini-batch gradient descent** ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], 
theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Hồi quy Đa thức**Polynomial regression** ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) 
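# The degree-10 polynomial pipeline below overfits: its training RMSE stays well
# below the validation RMSE, and that persistent gap between the two learning
# curves is the classic sign of overfitting (contrast with the underfitting
# linear model above, whose curves plateau close together at a higher error).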
plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Mô hình Điều chuẩn**Regularized models** ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Lưu ý**: để thống nhất với phiên bản trong tương lai, chúng ta đặt `max_iter=1000` và `tol=1e-3` bởi chúng là các giá trị mặc định trong Scikit-Learn 0.21 ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Ví dụ về dừng sớm (early stopping): ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = 
mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Vẽ đồ thị: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, 
t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Hồi quy Logistic**Logistic regression** ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Lưu ý**: Để thống nhất với phiên bản trong tương lai, chúng ta đặt `solver="lbfgs"` bởi đây là giá trị mặc định trong Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown Hình này trong cuốn sách thực tế phức tạp hơn một chút: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, 
fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Lời giải bài tập 1. đến 11. Tham khảo Phụ lục A. 12. Hạ Gradient theo Batch với dừng sớm cho Hồi quy Softmax (không sử dụng Scikit-Learn) Hãy bắt đầu bằng cách nạp vào dữ liệu. Chúng ta đơn giản là sử dụng lại bộ dữ liệu Irish đã nạp từ trước. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown Chúng ta cần thêm vào thiên kiến cho mỗi mẫu ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown Và hãy đặt random seed cố định để có thể tái tạo lại đầu ra của lời giải này: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown Tùy chọn dễ dàng nhất để chia tập dữ liệu thành một tập huấn luyện, một tập kiểm định, và một tập kiểm tra là sử dụng hàm `train_test_split()` của Scikit-Learn, tuy nhiên mục đích của bài tập này là cố gắng hiểu được thuật toán bằng cách triển khai chúng theo cách thủ công. Do đó, dưới đây là một cách thực hiện điều này: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown Các nhãn mục tiêu hiện tại là chỉ số lớp (0, 1, hoặc 2), tuy nhiên chúng ta cần các nhãn mục tiêu xác suất để huấn luyện mô hình Hồi quy Softmax. 
Mỗi mẫu sẽ có xác suất các lớp mục tiêu không phải nhãn thực bằng 0.0, và lớp mục tiêu với nhãn thực sẽ có xác suất bằng 1.0 (nói cách khác, vector xác suất các lớp của một mẫu bất kỳ là một vector one-hot). Hãy viết một hàm đơn giản để chuyển đổi vector chỉ số lớp thành một ma trận chứa một vector one-hot cho mỗi mẫu: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Kiểm tra hàm này với 10 mẫu đầu tiên: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Trông khá ổn, tiếp tục hãy tạo ma trận xác xuất lớp mục tiêu cho tập huấn luyện, tập kiểm định, và tập kiểm tra: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Bây giờ, hãy áp dụng hàm Softmax. Nhớ lại rằng hàm Softmax được xác định bởi phương trình sau:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown Chúng ta gần như đã sẵn sàng để huấn luyện. Hãy định nghĩa số lượng đầu ra và đầu vào: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Bây giờ sẽ đến phần khó nhằn nhất: huấn luyện! Về mặt lý thuyết, nó tương đối đơn giản: chỉ việc chuyển đổi các phương trình toán học sang mã lập trình Python. Tuy nhiên trong thực tế, điều này khá là phức tạp: cụ thể là thứ tự các thuật ngữ và chỉ số rất dễ bị nhẫm lẫn. Thậm chí bạn có thể viết được một đoạn mã có vẻ như hoạt động tốt, nhưng thực ra lại không tính toán đúng như những gì ta muốn. Khi không chắc chắn, bạn nên viết ra hình dạng của từng thuật ngữ trong phương trình và đảm bảo rằng chúng khớp với các thuật ngữ tương ứng trong mã lập trình của bạn. Điều này giúp đánh giá từng thuật ngữ một cách độc lập cũng như in chúng ra. Tin tốt là bạn sẽ không phải làm điều này mỗi lần bởi tất cả đều được hỗ trợ đầy đủ bởi Scikit-Learn, tuy nhiên nó sẽ giúp bạn hiểu được những gì đang diễn ra ở bên dưới.Ta có phương trình của hàm chi phí:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$Và phương trình của gradient:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Chú ý rằng ta không thể tính được $\log\left(\hat{p}_k^{(i)}\right)$ nếu $\hat{p}_k^{(i)} = 0$. Vì vậy ta sẽ thêm một giá trị $\epsilon$ nhỏ vào $\log\left(\hat{p}_k^{(i)}\right)$ để tránh các giá trị `nan`. 
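One more practical aside before training (optional, and not needed for this small dataset): the `softmax()` above exponentiates the raw logits, which can overflow for very large scores. A numerically safer variant subtracts the per-row maximum first; the name `softmax_stable()` below is illustrative. ###Code
def softmax_stable(logits):
    # Subtracting the row-wise max does not change the result mathematically,
    # but it keeps np.exp() from overflowing when the logits are large.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown The training loop below keeps the original `softmax()`, which is fine for the small logits produced here.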
###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown Đã xong! Mô hình Softmax đã được huấn luyện. Hãy in ra các tham số mô hình: ###Code Theta ###Output _____no_output_____ ###Markdown Hãy thử dự đoán trên tập kiểm định và kiểm tra độ chính xác: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Mô hình này trông khá tốt. Theo yêu cầu của bài tập, hãy thêm vào một ít điều chuẩn $\ell_2$. Đoạn mã huấn luyện dưới đây tương tự như đoạn trên, tuy nhiên giá trị mất mát nay được cộng thêm giá trị phạt $\ell_2$, và gradient được cộng thêm vào một đại lượng thích hợp (chú ý rằng chúng ta không điều chuẩn phần tử đầu tiên của `Theta` vì nó tương ứng với đại lượng thiên kiến). Ngoài ra, hãy thử tăng tốc độ học `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Nhờ giá trị phạt bổ sung $\ell_2$, giá trị mất mát trông tốt hơn so với trước đây, tuy nhiên liệu điều này có giúp mô hình của chúng ta hoạt động tốt hơn không? Hãy cùng tìm hiểu xem: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Tuyệt, chính xác tuyệt đối! Chúng ta có thể chỉ gặp may trên tập kiểm định này, tuy nhiên, điều này vẫn khá là tích cực Bây giờ hãy thêm tính năng dừng sớm. Để làm được điều này, chúng ta chỉ cần đo giá trị mất mát trên tập kiểm định tại mỗi vòng lặp và dừng lại khi lỗi bắt đầu tăng. 
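Stopping at the very first increase in validation loss can be sensitive to noise; a common refinement waits for several non-improving iterations ("patience") before stopping. The helper below is only an illustrative sketch (the name `should_stop` and the default `patience=20` are assumptions, not from the book). ###Code
def should_stop(val_losses, patience=20):
    # Stop once the best validation loss is at least `patience` iterations old.
    best = int(np.argmin(val_losses))
    return len(val_losses) - 1 - best >= patience
###Output _____no_output_____ ###Markdown The cell below sticks to the simple stop-at-the-first-increase rule.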
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Vẫn tuyệt đối chính xác, nhưng nhanh hơn. Bây giờ hãy vẽ đồ thị dự đoán của mô hình trên toàn bộ tập dữ liệu: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown Tiếp theo hãy đo độ chính xác của mô hình trên tập kiểm tra: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab **Warning**: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
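# In this figure, all three parameter paths end up in the same neighbourhood of the
# minimum: batch GD takes the most direct route, mini-batch GD wanders a little,
# and SGD wanders the most.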
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
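One detail worth noting: the loop above stops as soon as the validation loss increases, but it keeps the `Theta` from that last (slightly worse) iteration rather than the best one seen. A minimal variant that also snapshots the best parameters might look like this (a sketch, not from the book, reusing the hyperparameters, `softmax()` and one-hot targets defined above): ###Code
best_loss = np.infty
best_Theta = None
Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # gradient step on the training set (same update as above)
    Y_proba = softmax(X_train.dot(Theta))
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # validation loss (cross-entropy plus the l2 penalty, as above)
    Y_proba_valid = softmax(X_valid.dot(Theta))
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    loss = xentropy_loss + 0.5 * alpha * np.sum(np.square(Theta[1:]))

    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta.copy()   # snapshot the best parameters so far
    else:
        break                       # validation loss went up: stop

Theta = best_Theta                  # roll back to the parameters with the lowest validation loss
###Output _____no_output_____ ###Markdown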
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
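As a quick sanity check (not in the original notebook), we can compare this hand-rolled model against Scikit-Learn's own softmax classifier on the same features; note that `X_train` and `X_valid` carry a bias column, which `LogisticRegression` adds on its own, so we drop it here: ###Code
from sklearn.linear_model import LogisticRegression

softmax_clf = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)
softmax_clf.fit(X_train[:, 1:], y_train)     # petal length and width only, no bias column
softmax_clf.score(X_valid[:, 1:], y_valid)   # validation accuracy, for comparison with ours
###Output _____no_output_____ ###Markdown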
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
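Before inspecting the parameters, it can be handy to wrap the prediction step we will keep repeating below into two small helpers (a convenience sketch added here; these helper names are not part of the original exercise): ###Code
def predict_proba_manual(X_batch, Theta):
    # X_batch must already include the bias column x0 = 1
    return softmax(X_batch.dot(Theta))

def predict_class_manual(X_batch, Theta):
    # pick the class with the highest estimated probability
    return np.argmax(predict_proba_manual(X_batch, Theta), axis=1)
###Output _____no_output_____ ###Markdown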
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
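One detail worth noting about the loop above: when it breaks, `Theta` has already taken the gradient step that made the validation loss rise. A small variation (an illustrative sketch, not the notebook's original solution) keeps a copy of the best parameters and rolls back to them before stopping, mirroring the `deepcopy(sgd_reg)` trick used in the earlier early stopping example: ###Code
best_loss = np.infty
best_Theta = None
Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # same regularized training step as above
    logits = X_train.dot(Theta)
    Y_proba = softmax(logits)
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # validation loss (cross entropy plus the l2 penalty)
    Y_proba_valid = softmax(X_valid.dot(Theta))
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta[1:]))

    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta.copy()  # remember the best parameters seen so far
    else:
        Theta = best_Theta  # roll back to the best parameters before stopping
        break
###Output _____no_output_____ ###Markdown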
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 경사 하강법 배치 경사 하강법 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, 
"b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output 그림 
저장: high_degree_polynomials_plot ###Markdown 학습 곡선 ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 선형 모델 릿지 회귀 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ 라쏘 회귀 ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 엘라스틱넷 **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 조기 종료 ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as 
plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 결정 경계 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T 
\mathbf{x}^{(i)}) - y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown 소프트맥스 회귀 ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} 
\sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 
다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.4106007142918715 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629506 1000 0.503640075014894 1500 0.4946891059460321 2000 0.49129684180754774 2500 0.489899247009333 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 
확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output Saving figure generated_data_plot ###Markdown Paulo Imp ###Code X_b = np.c_[np.ones((100, 1)), X] X_b[:5] np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y ###Output _____no_output_____ ###Markdown ------------------------------------------- ###Code X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
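# Comparing the three paths plotted here: batch GD walks fairly straight to the minimum, while mini-batch and (especially) stochastic GD wander around before ending up close to it.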
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
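# With a degree-10 polynomial the training RMSE stays well below the validation RMSE, which is the signature of an overfitting model.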
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown
Exercise solutions

1. to 11.

1. If you have a training set with millions of features you can use Stochastic Gradient Descent or Mini-batch Gradient Descent, and perhaps Batch Gradient Descent if the training set fits in memory. But you cannot use the Normal Equation or the SVD approach because the computational complexity grows quickly (more than quadratically) with the number of features.

2. If the features in your training set have very different scales, the cost function will have the shape of an elongated bowl, so the Gradient Descent algorithms will take a long time to converge. To solve this you should scale the data before training the model. Note that the Normal Equation or SVD approach will work just fine without scaling. Moreover, regularized models may converge to a suboptimal solution if the features are not scaled: since regularization penalizes large weights, features with smaller values will tend to be ignored compared to features with larger values.

3. Gradient Descent cannot get stuck in a local minimum when training a Logistic Regression model because the cost function is convex.

4. If the optimization problem is convex (such as Linear Regression or Logistic Regression), and assuming the learning rate is not too high, then all Gradient Descent algorithms will approach the global optimum and end up producing fairly similar models. However, unless you gradually reduce the learning rate, Stochastic GD and Mini-batch GD will never truly converge; instead, they will keep jumping back and forth around the global optimum. This means that even if you let them run for a very long time, these Gradient Descent algorithms will produce slightly different models.

5. If the validation error consistently goes up after every epoch, then one possibility is that the learning rate is too high and the algorithm is diverging. If the training error also goes up, then this is clearly the problem and you should reduce the learning rate. However, if the training error is not going up, then your model is overfitting the training set and you should stop training (a small sketch of this check follows below).
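Illustration only: a rough heuristic version of this check, assuming hypothetical lists `train_errors` and `val_errors` that hold per-epoch errors like the ones recorded in the early-stopping example earlier in this notebook:
###Code
def diagnose_rising_val_error(train_errors, val_errors):
    # Compare the last few epochs of each curve (a rough heuristic, not a rule).
    val_rising = val_errors[-1] > val_errors[-5]
    train_rising = train_errors[-1] > train_errors[-5]
    if val_rising and train_rising:
        return "both errors rising: the learning rate is probably too high, reduce it"
    elif val_rising:
        return "only the validation error rising: likely overfitting, stop training"
    return "validation error not rising: keep training"
###Output
_____no_output_____
###Markdown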
6. Due to their random nature, neither Stochastic Gradient Descent nor Mini-batch Gradient Descent is guaranteed to make progress at every single training iteration. So if you immediately stop training when the validation error goes up, you may stop much too early, before the optimum is reached. A better option is to save the model at regular intervals; then, when it has not improved for a long time (meaning it will probably never beat the record), you can revert to the best saved model.

7. Stochastic Gradient Descent has the fastest training iteration since it considers only one training instance at a time, so it is generally the first to reach the vicinity of the global optimum (or Mini-batch GD with a very small mini-batch size). However, only Batch Gradient Descent will actually converge, given enough training time. As mentioned, Stochastic GD and Mini-batch GD will bounce around the optimum, unless you gradually reduce the learning rate.

8. If the validation error is much higher than the training error, this is likely because your model is overfitting the training set. One way to try to fix this is to reduce the polynomial degree: a model with fewer degrees of freedom is less likely to overfit. Another thing you can try is to regularize the model, for example by adding an ℓ2 penalty (Ridge) or an ℓ1 penalty (Lasso) to the cost function. This will also reduce the degrees of freedom of the model. Lastly, you can try to increase the size of the training set.

9. If both the training error and the validation error are almost equal and fairly high, the model is likely underfitting the training set, which means it has a high bias. You should try reducing the regularization hyperparameter α. (Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, p. 723. O'Reilly Media, Kindle edition.)

10.
- A model with some regularization typically performs better than a model without any regularization, so you should generally prefer Ridge Regression over plain Linear Regression.
- Lasso Regression uses an ℓ1 penalty, which tends to push the weights down to exactly zero. This leads to sparse models, where all weights are zero except for the most important weights. This is a way to perform feature selection automatically, which is good if you suspect that only a few features actually matter. When you are not sure, you should prefer Ridge Regression.
- Elastic Net is generally preferred over Lasso since Lasso may behave erratically in some cases (when several features are strongly correlated or when there are more features than training instances). However, it does add an extra hyperparameter to tune. If you want Lasso without the erratic behavior, you can just use Elastic Net with an l1_ratio close to 1.

11. If you want to classify pictures as outdoor/indoor and daytime/nighttime, since these are not exclusive classes (i.e., all four combinations are possible) you should train two Logistic Regression classifiers; a small sketch of this setup follows below. (Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, p. 723. O'Reilly Media, Kindle edition.)
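A small sketch of that setup (illustration only, using hypothetical feature and label arrays rather than data from this notebook):
###Code
from sklearn.linear_model import LogisticRegression

# Hypothetical data: 100 pictures described by 5 features each,
# with two independent (non-exclusive) binary labels per picture.
X_pics = np.random.rand(100, 5)
y_outdoor = np.random.randint(0, 2, size=100)   # 1 = outdoor, 0 = indoor
y_daytime = np.random.randint(0, 2, size=100)   # 1 = daytime, 0 = nighttime

# One Logistic Regression classifier per label.
outdoor_clf = LogisticRegression(solver="lbfgs").fit(X_pics, y_outdoor)
daytime_clf = LogisticRegression(solver="lbfgs").fit(X_pics, y_daytime)
###Output
_____no_output_____
###Markdown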
12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn)

Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier.
###Code
X = iris["data"][:, (2, 3)]  # petal length, petal width
y = iris["target"]
###Output
_____no_output_____
###Markdown
We need to add the bias term for every instance ($x_0 = 1$):
###Code
X_with_bias = np.c_[np.ones([len(X), 1]), X]
###Output
_____no_output_____
###Markdown
And let's set the random seed so the output of this exercise solution is reproducible:
###Code
np.random.seed(2042)
###Output
_____no_output_____
###Markdown
The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation:
###Code
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias)

test_size = int(total_size * test_ratio)
validation_size = int(total_size * validation_ratio)
train_size = total_size - test_size - validation_size

rnd_indices = np.random.permutation(total_size)

X_train = X_with_bias[rnd_indices[:train_size]]
y_train = y[rnd_indices[:train_size]]
X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
y_valid = y[rnd_indices[train_size:-test_size]]
X_test = X_with_bias[rnd_indices[-test_size:]]
y_test = y[rnd_indices[-test_size:]]
###Output
_____no_output_____
###Markdown
The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance:
###Code
def to_one_hot(y):
    n_classes = y.max() + 1
    m = len(y)
    Y_one_hot = np.zeros((m, n_classes))
    Y_one_hot[np.arange(m), y] = 1
    return Y_one_hot
###Output
_____no_output_____
###Markdown
Let's test this function on the first 10 instances:
###Code
y_train[:10]
to_one_hot(y_train[:10])
###Output
_____no_output_____
###Markdown
Looks good, so let's create the target class probabilities matrix for the training set and the test set:
###Code
Y_train_one_hot = to_one_hot(y_train)
Y_valid_one_hot = to_one_hot(y_valid)
Y_test_one_hot = to_one_hot(y_test)
###Output
_____no_output_____
###Markdown
Now let's implement the Softmax function. Recall that it is defined by the following equation:

$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$
###Code
def softmax(logits):
    exps = np.exp(logits)
    exp_sums = np.sum(exps, axis=1, keepdims=True)
    return exps / exp_sums
###Output
_____no_output_____
###Markdown
We are almost ready to start training. Let's define the number of inputs and outputs:
###Code
n_inputs = X_train.shape[1]          # == 3 (2 features plus the bias term)
n_outputs = len(np.unique(y_train))  # == 3 (3 iris classes)
###Output
_____no_output_____
###Markdown
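Before writing the training loop, a quick optional sanity check of the shapes involved (using the arrays defined above) can save some debugging later:
###Code
X_train.shape          # (90, 3): 90 training instances, 2 features plus the bias column
Y_train_one_hot.shape  # (90, 3): one column per class
###Output
_____no_output_____
###Markdown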
Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.

So the equations we will need are the cost function:

$J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$

And the equation for the gradients:

$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$

Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values.
###Code
eta = 0.01
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7

Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    logits = X_train.dot(Theta)
    Y_proba = softmax(logits)
    loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
    error = Y_proba - Y_train_one_hot
    if iteration % 500 == 0:
        print(iteration, loss)
    gradients = 1/m * X_train.T.dot(error)
    Theta = Theta - eta * gradients
###Output
0 5.446205811872683
500 0.8350062641405651
1000 0.6878801447192402
1500 0.6012379137693313
2000 0.5444496861981873
2500 0.5038530181431525
3000 0.4729228972192248
3500 0.4482424418895776
4000 0.4278651093928793
4500 0.41060071429187134
5000 0.3956780375390374
###Markdown
And that's it! The Softmax model is trained. Let's look at the model parameters:
###Code
Theta
###Output
_____no_output_____
###Markdown
Let's make predictions for the validation set and check the accuracy score:
###Code
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)

accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
###Output
_____no_output_____
###Markdown
Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
###Code
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1  # regularization hyperparameter

Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    logits = X_train.dot(Theta)
    Y_proba = softmax(logits)
    xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
    l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
    loss = xentropy_loss + alpha * l2_loss
    error = Y_proba - Y_train_one_hot
    if iteration % 500 == 0:
        print(iteration, loss)
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients
###Output
0 6.629842469083912
500 0.5339667976629505
1000 0.503640075014894
1500 0.49468910594603216
2000 0.4912968418075477
2500 0.489899247009333
3000 0.48929905984511984
3500 0.48903512443978603
4000 0.4889173621830818
4500 0.4888643337449303
5000 0.4888403120738818
###Markdown
Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better?
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 설정 파이썬 2와 3을 모두 지원합니다. 
공통 모듈을 임포트하고 맷플롯립 그림이 노트북 안에 포함되도록 설정하고 생성한 그림을 저장하기 위한 함수를 준비합니다: ###Code # 파이썬 2와 파이썬 3 지원 from __future__ import division, print_function, unicode_literals # 공통 import numpy as np import os # 일관된 출력을 위해 유사난수 초기화 np.random.seed(42) # 맷플롯립 설정 %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # 한글출력 matplotlib.rc('font', family='NanumBarunGothic') plt.rcParams['axes.unicode_minus'] = False # 그림을 저장할 폴드 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-", linewidth=2, label="예측") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 scipy.linalg.lstsq() 함수("least squares"의 약자)를 사용하므로 직접 호출할 수 있습니다: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_(pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). 
`np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 경사 하강법을 사용한 선형 회귀 ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output _____no_output_____ ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 무작위 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 빠짐 y_predict = X_new_b.dot(theta) # 책에는 빠짐 style = "b-" if i > 0 else "r--" # 책에는 빠짐 plt.plot(X_new, y_predict, style) # 책에는 빠짐 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 빠짐 plt.plot(X, y, "b.") # 책에는 빠짐 plt.xlabel("$x_1$", fontsize=18) # 책에는 빠짐 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 빠짐 plt.axis([0, 2, 0, 15]) # 책에는 빠짐 save_fig("sgd_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 무작위 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="SGD") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="미니배치") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="배치") plt.legend(loc="upper left", fontsize=16) 
plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="예측") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="훈련") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="검증") plt.legend(loc="upper right", fontsize=14) # 책에는 빠짐 plt.xlabel("훈련 세트 크기", fontsize=14) # 책에는 빠짐 plt.ylabel("RMSE", fontsize=14) # 책에는 빠짐 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 빠짐 save_fig("underfitting_learning_curves_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 빠짐 save_fig("learning_curves_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 ###Output _____no_output_____ ###Markdown 규제가 있는 모델 ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def 
plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('최선의 모델', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="검증 세트") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="훈련 세트") plt.legend(loc="upper right", fontsize=14) plt.xlabel("에포크", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base 
import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 이어서 학습합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # 편향은 무시 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0, labelpad=15) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) 
print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 넓이 y = (iris["target"] == 2).astype(np.int) # Iris-Virginica이면 1 아니면 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "결정 경계", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("꽃잎의 폭 (cm)", fontsize=14) plt.ylabel("확률", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Iris-Virginica 아님", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("꽃잎의 길이", fontsize=14) plt.ylabel("꽃잎의 폭", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("꽃잎의 길이", fontsize=14) plt.ylabel("꽃잎의 폭", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ 
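###Markdown As an added aside (not part of the original notebook): the class probabilities returned by `predict_proba` are simply the softmax of the per-class decision scores, which you can check directly, assuming the fitted `softmax_reg` from the cells above: ###Code
scores = softmax_reg.decision_function([[5, 2]])            # one raw score per class
np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # matches predict_proba([[5, 2]]) up to rounding
###Output _____no_output_____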
###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) First, let's load the data. We will simply reuse the Iris dataset we used earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown Add the bias term to every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown Set the random seed so the results stay reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest way to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to understand the algorithms by implementing them by hand, so here is one possible way to do it: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are class indices (0, 1 and 2), but to train a Softmax Regression model we need the target class probabilities. For each instance, every class has probability 0 except for the target class, which has probability 1 (in other words, the class probabilities of a given instance form a one-hot vector). Let's write a simple function that converts class indices into one-hot vectors: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on just 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so now let's create the matrices holding the target class probabilities for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now comes the somewhat tricky part: training! In theory it is simple: we just translate the math equations into Python code. In practice, however, it is quite easy to mix up the terms and the indices, and to end up with code that looks like it works but does not compute exactly the right thing. When unsure, write down the shape of each term and check that the corresponding code produces the same shape; evaluating and printing each term independently also helps. In fact you don't have to do this, since Scikit-Learn already implements it all well, but building it yourself helps you understand how it works.The equation to implement is the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$If $\hat{p}_k^{(i)} = 0$ then $\log\left(\hat{p}_k^{(i)}\right)$ cannot be computed, so we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values.
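A quick illustration of why this matters (an added aside, not part of the original solution): ###Code
np.log(np.array([1.0, 0.0]))          # the 0.0 entry gives -inf, and 0 * -inf later turns into nan in the loss
np.log(np.array([1.0, 0.0]) + 1e-7)   # adding the tiny epsilon keeps every term finite
###Output _____no_output_____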
###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446183864821945 500 0.8351003035768683 1000 0.6876961554414913 1500 0.6010299835452123 2000 0.5442782811959168 2500 0.5037262742244606 3000 0.4728357293908467 3500 0.4481872508179334 4000 0.4278347262806173 4500 0.4105891022823527 5000 0.3956803257488941 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's check the predictions and the accuracy on the validation set: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Wow, this model seems to work very well. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following code is almost identical to the one above, but an $\ell_2$ penalty is added to the loss and the corresponding term is added to the gradients (we don't regularize the first element of `Theta` since it corresponds to the bias term). Let's also try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629574947908294 500 0.5341631554372782 1000 0.5037712748637473 1500 0.4948056455575166 2000 0.4914081948411196 2500 0.4900085074445458 3000 0.4894074289613261 3500 0.4891431024691195 4000 0.4890251654906585 4500 0.48897205809605315 5000 0.4889480004791562 ###Markdown Because of the added $\ell_2$ penalty the loss looks a bit larger than before, but did it produce a better model? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Wow, perfect accuracy! We may just have been lucky with this validation set, but still, it worked out well. Now let's add early stopping. For this we need to compute the loss on the validation set at every iteration and stop when the error starts growing.
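One practical aside before running it (this helper is hypothetical and not used in the original solution): stopping at the very first increase can be sensitive to noise in the validation loss, so a common variant only stops after the loss has failed to improve for several consecutive checks ("patience"): ###Code
def stopped_improving(val_losses, patience=50):
    """Return True once the best validation loss is at least `patience` checks old (hypothetical helper)."""
    best = int(np.argmin(val_losses))
    return len(val_losses) - 1 - best >= patience
###Output _____no_output_____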
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, and faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown Now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
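Under the hood the pseudoinverse itself is computed from the Singular Value Decomposition of $\mathbf{X}$, zeroing out any negligible singular values; a minimal sketch of that computation, reusing `X_b` and `y` from above (an added aside, not from the original notebook): ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)  # X_b = U * diag(s) * Vt
X_b_pinv = Vt.T.dot(np.diag(1 / s)).dot(U.T)        # pseudoinverse; 1/s is safe here since X_b has full column rank
X_b_pinv.dot(y)                                     # same theta as the lstsq solution above
###Output _____no_output_____ ###Markdown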
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
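# Note how the batch GD path below ends up right at the minimum, while the stochastic and mini-batch paths keep bouncing around it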
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
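# With the degree-10 polynomial, the training error stays well below the validation error: the gap between the two curves is the hallmark of overfitting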
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
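As a quick sanity check of the shapes discussed above (an added aside; it reuses the arrays already defined in this solution): ###Code
X_train.shape, Theta.shape                              # (m, 3) inputs and (3, 3) parameters
X_train.dot(Theta).shape                                # logits: one score per class, shape (m, 3)
(softmax(X_train.dot(Theta)) - Y_train_one_hot).shape   # the error term used in the gradients, also (m, 3)
###Output _____no_output_____ ###Markdown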
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
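One caveat worth noting about the loop below (an added aside, not in the original notebook): when it breaks, `Theta` has already taken one update step past the best parameters, so you may also want to keep a copy of the best `Theta` and restore it afterwards, in the same spirit as the `deepcopy(sgd_reg)` bookkeeping used in the earlier early stopping example: ###Code
best_Theta = None    # hypothetical extra bookkeeping, sketched here rather than added to the original cell
# inside the loop, right after `best_loss = loss`, also do:  best_Theta = Theta.copy()
# and once the loop stops:                                   Theta = best_Theta
###Output _____no_output_____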
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup This project requires Python 3.7 or above: ###Code import sys assert sys.version_info >= (3, 7) ###Output _____no_output_____ ###Markdown It also requires Scikit-Learn ≥ 1.0.1: ###Code import sklearn assert sklearn.__version__ >= "1.0.1" ###Output _____no_output_____ ###Markdown As we did in previous chapters, let's define the default font sizes to make the figures prettier: ###Code import matplotlib.pyplot as plt plt.rc('font', size=14) plt.rc('axes', labelsize=14, titlesize=14) plt.rc('legend', fontsize=14) plt.rc('xtick', labelsize=10) plt.rc('ytick', labelsize=10) ###Output _____no_output_____ ###Markdown And let's create the `images/training_linear_models` folder (if it doesn't already exist), and define the `save_fig()` function which is used through this notebook to save the figures in high-res for the book: ###Code from pathlib import Path IMAGES_PATH = Path() / "images" / "training_linear_models" IMAGES_PATH.mkdir(parents=True, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = IMAGES_PATH / f"{fig_id}.{fig_extension}" if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np np.random.seed(42) # to make this code example reproducible m = 100 # number of instances X = 2 * np.random.rand(m, 1) # column vector y = 4 + 3 * X + np.random.randn(m, 1) # column vector # extra code – generates and saves Figure 4–1 import matplotlib.pyplot as plt plt.figure(figsize=(6, 4)) plt.plot(X, y, "b.") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([0, 2, 0, 15]) plt.grid() save_fig("generated_data_plot") plt.show() from sklearn.preprocessing import add_dummy_feature X_b = add_dummy_feature(X) # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y theta_best X_new = np.array([[0], [2]]) X_new_b = add_dummy_feature(X_new) # add x0 = 1 to each instance y_predict = X_new_b @ theta_best y_predict import 
matplotlib.pyplot as plt plt.figure(figsize=(6, 4)) # extra code – not needed, just formatting plt.plot(X_new, y_predict, "r-", label="Predictions") plt.plot(X, y, "b.") # extra code – beautifies and saves Figure 4–2 plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([0, 2, 0, 15]) plt.grid() plt.legend(loc="upper left") save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b) @ y ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_epochs = 1000 m = len(X_b) # number of instances np.random.seed(42) theta = np.random.randn(2, 1) # randomly initialized model parameters for epoch in range(n_epochs): gradients = 2 / m * X_b.T @ (X_b @ theta - y) theta = theta - eta * gradients ###Output _____no_output_____ ###Markdown The trained model parameters: ###Code theta # extra code – generates and saves Figure 4–8 import matplotlib as mpl def plot_gradient_descent(theta, eta): m = len(X_b) plt.plot(X, y, "b.") n_epochs = 1000 n_shown = 20 theta_path = [] for epoch in range(n_epochs): if epoch < n_shown: y_predict = X_new_b @ theta color = mpl.colors.rgb2hex(plt.cm.OrRd(epoch / n_shown + 0.15)) plt.plot(X_new, y_predict, linestyle="solid", color=color) gradients = 2 / m * X_b.T @ (X_b @ theta - y) theta = theta - eta * gradients theta_path.append(theta) plt.xlabel("$x_1$") plt.axis([0, 2, 0, 15]) plt.grid() plt.title(fr"$\eta = {eta}$") return theta_path np.random.seed(42) theta = np.random.randn(2, 1) # random initialization plt.figure(figsize=(10, 4)) plt.subplot(131) plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0) plt.subplot(132) theta_path_bgd = plot_gradient_descent(theta, eta=0.1) plt.gca().axes.yaxis.set_ticklabels([]) plt.subplot(133) plt.gca().axes.yaxis.set_ticklabels([]) plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output _____no_output_____ ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] # extra code – we need to store the path of theta in the # parameter space to plot the next figure n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) np.random.seed(42) theta = np.random.randn(2, 1) # random initialization n_shown = 20 # extra code – just needed to generate the figure below plt.figure(figsize=(6, 4)) # extra code – not needed, just formatting for epoch in range(n_epochs): for iteration in range(m): # extra code – these 4 lines are used to generate the figure if epoch == 0 and iteration < n_shown: y_predict = X_new_b @ theta color = mpl.colors.rgb2hex(plt.cm.OrRd(iteration / n_shown + 0.15)) plt.plot(X_new, y_predict, color=color) random_index = np.random.randint(m) xi = X_b[random_index : random_index + 1] yi = y[random_index : random_index + 1] gradients = 2 * 
xi.T @ (xi @ theta - yi) # for SGD, do not divide by m eta = learning_schedule(epoch * m + iteration) theta = theta - eta * gradients theta_path_sgd.append(theta) # extra code – to generate the figure # extra code – this section beautifies and saves Figure 4–10 plt.plot(X, y, "b.") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([0, 2, 0, 15]) plt.grid() save_fig("sgd_plot") plt.show() theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-5, penalty=None, eta0=0.01, n_iter_no_change=100, random_state=42) sgd_reg.fit(X, y.ravel()) # y.ravel() because fit() expects 1D targets sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent The code in this section is used to generate the next figure, it is not in the book. ###Code # extra code – this cell generates and saves Figure 4–11 from math import ceil n_epochs = 50 minibatch_size = 20 n_batches_per_epoch = ceil(m / minibatch_size) np.random.seed(42) theta = np.random.randn(2, 1) # random initialization t0, t1 = 200, 1000 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta_path_mgd = [] for epoch in range(n_epochs): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for iteration in range(0, n_batches_per_epoch): idx = iteration * minibatch_size xi = X_b_shuffled[idx : idx + minibatch_size] yi = y_shuffled[idx : idx + minibatch_size] gradients = 2 / minibatch_size * xi.T @ (xi @ theta - yi) eta = learning_schedule(iteration) theta = theta - eta * gradients theta_path_mgd.append(theta) theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7, 4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left") plt.xlabel(r"$\theta_0$") plt.ylabel(r"$\theta_1$ ", rotation=0) plt.axis([2.6, 4.6, 2.3, 3.4]) plt.grid() save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown Polynomial Regression ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1) # extra code – this cell generates and saves Figure 4–12 plt.figure(figsize=(6, 4)) plt.plot(X, y, "b.") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([-3, 3, 0, 10]) plt.grid() save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ # extra code – this cell generates and saves Figure 4–13 X_new = np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.figure(figsize=(6, 4)) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.legend(loc="upper left") plt.axis([-3, 3, 0, 10]) plt.grid() save_fig("quadratic_predictions_plot") plt.show() # extra code – this cell generates and saves Figure 4–14 from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline 
plt.figure(figsize=(6, 4)) for style, width, degree in (("r-+", 2, 1), ("b--", 2, 2), ("g-", 1, 300)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = make_pipeline(polybig_features, std_scaler, lin_reg) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) label = f"{degree} degree{'s' if degree > 1 else ''}" plt.plot(X_new, y_newbig, style, label=label, linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$") plt.ylabel("$y$", rotation=0) plt.axis([-3, 3, 0, 10]) plt.grid() save_fig("high_degree_polynomials_plot") plt.show() ###Output _____no_output_____ ###Markdown Learning Curves ###Code from sklearn.model_selection import learning_curve train_sizes, train_scores, valid_scores = learning_curve( LinearRegression(), X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5, scoring="neg_root_mean_squared_error") train_errors = -train_scores.mean(axis=1) valid_errors = -valid_scores.mean(axis=1) plt.figure(figsize=(6, 4)) # extra code – not needed, just formatting plt.plot(train_sizes, train_errors, "r-+", linewidth=2, label="train") plt.plot(train_sizes, valid_errors, "b-", linewidth=3, label="valid") # extra code – beautifies and saves Figure 4–15 plt.xlabel("Training set size") plt.ylabel("RMSE") plt.grid() plt.legend(loc="upper right") plt.axis([0, 80, 0, 2.5]) save_fig("underfitting_learning_curves_plot") plt.show() from sklearn.pipeline import make_pipeline polynomial_regression = make_pipeline( PolynomialFeatures(degree=10, include_bias=False), LinearRegression()) train_sizes, train_scores, valid_scores = learning_curve( polynomial_regression, X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5, scoring="neg_root_mean_squared_error") # extra code – generates and saves Figure 4–16 train_errors = -train_scores.mean(axis=1) valid_errors = -valid_scores.mean(axis=1) plt.figure(figsize=(6, 4)) plt.plot(train_sizes, train_errors, "r-+", linewidth=2, label="train") plt.plot(train_sizes, valid_errors, "b-", linewidth=3, label="valid") plt.legend(loc="upper right") plt.xlabel("Training set size") plt.ylabel("RMSE") plt.grid() plt.axis([0, 80, 0, 2.5]) save_fig("learning_curves_plot") plt.show() ###Output _____no_output_____ ###Markdown Regularized Linear Models Ridge Regression Let's generate a very small and noisy linear dataset: ###Code # extra code – we've done this type of generation several times before np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) # extra code – a quick peek at the dataset we just generated plt.figure(figsize=(6, 4)) plt.plot(X, y, ".") plt.xlabel("$x_1$") plt.ylabel("$y$ ", rotation=0) plt.axis([0, 3, 0, 3.5]) plt.grid() plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=0.1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # extra code – this cell generates and saves Figure 4–17 def plot_model(model_class, polynomial, alphas, **model_kwargs): plt.plot(X, y, "b.", linewidth=3) for alpha, style in zip(alphas, ("b:", "g--", "r-")): if alpha > 0: model = model_class(alpha, **model_kwargs) else: model = LinearRegression() if polynomial: model = make_pipeline( PolynomialFeatures(degree=10, include_bias=False), StandardScaler(), model) model.fit(X, y) y_new_regul = model.predict(X_new) plt.plot(X_new, y_new_regul, style, linewidth=2, label=fr"$\alpha = {alpha}$") 
plt.legend(loc="upper left") plt.xlabel("$x_1$") plt.axis([0, 3, 0, 3.5]) plt.grid() plt.figure(figsize=(9, 3.5)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$ ", rotation=0) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) plt.gca().axes.yaxis.set_ticklabels([]) save_fig("ridge_regression_plot") plt.show() sgd_reg = SGDRegressor(penalty="l2", alpha=0.1 / m, tol=None, max_iter=1000, eta0=0.01, random_state=42) sgd_reg.fit(X, y.ravel()) # y.ravel() because fit() expects 1D targets sgd_reg.predict([[1.5]]) # extra code – show that we get roughly the same solution as earlier when # we use Stochastic Average GD (solver="sag") ridge_reg = Ridge(alpha=0.1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # extra code – shows the closed form solution of Ridge regression, # compare with the next Ridge model's learned parameters below alpha = 0.1 A = np.array([[0., 0.], [0., 1.]]) X_b = np.c_[np.ones(m), X] np.linalg.inv(X_b.T @ X_b + alpha * A) @ X_b.T @ y ridge_reg.intercept_, ridge_reg.coef_ # extra code ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) # extra code – this cell generates and saves Figure 4–18 plt.figure(figsize=(9, 3.5)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$ ", rotation=0) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 1e-2, 1), random_state=42) plt.gca().axes.yaxis.set_ticklabels([]) save_fig("lasso_regression_plot") plt.show() # extra code – this BIG cell generates and saves Figure 4–19 t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1 / len(Xr) * ((T @ Xr.T - yr.T) ** 2).sum(axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(J.argmin(), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core=1, eta=0.05, n_iterations=200): path = [theta] for iteration in range(n_iterations): gradients = (core * 2 / len(X) * X.T @ (X @ theta - y) + l1 * np.sign(theta) + l2 * theta) theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2.0, 0, "Lasso"), (1, N2, 0, 2.0, "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2 ** 2 tr_min_idx = np.unravel_index(JR.argmin(), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levels = np.exp(np.linspace(0, 1, 20)) - 1 levelsJ = levels * (J.max() - J.min()) + J.min() levelsJR = levels * (JR.max() - JR.min()) + JR.min() levelsN = np.linspace(0, N.max(), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(theta=np.array([[2.0], [0.5]]), X=Xr, y=yr, l1=np.sign(l1) / 3, l2=np.sign(l2), core=0) ax = axes[i, 0] ax.grid() ax.axhline(y=0, color="k") ax.axvline(x=0, color="k") ax.contourf(t1, t2, N / 2.0, levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(fr"$\ell_{i + 1}$ 
penalty") ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$") ax.set_ylabel(r"$\theta_2$", rotation=0) ax = axes[i, 1] ax.grid() ax.axhline(y=0, color="k") ax.axvline(x=0, color="k") ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$") save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping Let's go back to the quadratic dataset we used earlier: ###Code from copy import deepcopy from sklearn.metrics import mean_squared_error from sklearn.preprocessing import StandardScaler # extra code – creates the same quadratic dataset as earlier and splits it np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1) X_train, y_train = X[: m // 2], y[: m // 2, 0] X_valid, y_valid = X[m // 2 :], y[m // 2 :, 0] preprocessing = make_pipeline(PolynomialFeatures(degree=90, include_bias=False), StandardScaler()) X_train_prep = preprocessing.fit_transform(X_train) X_valid_prep = preprocessing.transform(X_valid) sgd_reg = SGDRegressor(penalty=None, eta0=0.002, random_state=42) n_epochs = 500 best_valid_rmse = float('inf') train_errors, val_errors = [], [] # extra code – it's for the figure below for epoch in range(n_epochs): sgd_reg.partial_fit(X_train_prep, y_train) y_valid_predict = sgd_reg.predict(X_valid_prep) val_error = mean_squared_error(y_valid, y_valid_predict, squared=False) if val_error < best_valid_rmse: best_valid_rmse = val_error best_model = deepcopy(sgd_reg) # extra code – we evaluate the train error and save it for the figure y_train_predict = sgd_reg.predict(X_train_prep) train_error = mean_squared_error(y_train, y_train_predict, squared=False) val_errors.append(val_error) train_errors.append(train_error) # extra code – this section generates and saves Figure 4–20 best_epoch = np.argmin(val_errors) plt.figure(figsize=(6, 4)) plt.annotate('Best model', xy=(best_epoch, best_valid_rmse), xytext=(best_epoch, best_valid_rmse + 0.5), ha="center", arrowprops=dict(facecolor='black', shrink=0.05)) plt.plot([0, n_epochs], [best_valid_rmse, best_valid_rmse], "k:", linewidth=2) plt.plot(val_errors, "b-", linewidth=3, label="Validation set") plt.plot(best_epoch, best_valid_rmse, "bo") plt.plot(train_errors, "r--", linewidth=2, label="Training set") plt.legend(loc="upper right") plt.xlabel("Epoch") plt.ylabel("RMSE") plt.axis([0, n_epochs, 0, 3.5]) plt.grid() save_fig("early_stopping_plot") plt.show() ###Output _____no_output_____ ###Markdown Logistic Regression Estimating Probabilities ###Code # extra code – generates and saves Figure 4–21 lim = 6 t = np.linspace(-lim, lim, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(8, 3)) plt.plot([-lim, lim], [0, 0], "k-") plt.plot([-lim, lim], [0.5, 0.5], "k:") plt.plot([-lim, lim], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \dfrac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left") plt.axis([-lim, lim, -0.1, 1.1]) plt.gca().set_yticks([0, 0.25, 0.5, 0.75, 1]) plt.grid() save_fig("logistic_function_plot") plt.show() 
###Output _____no_output_____ ###Markdown Decision Boundaries ###Code from sklearn.datasets import load_iris iris = load_iris(as_frame=True) list(iris) print(iris.DESCR) # extra code – it's a bit too long iris.data.head(3) iris.target.head(3) # note that the instances are not shuffled iris.target_names from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split X = iris.data[["petal width (cm)"]].values y = iris.target_names[iris.target] == 'virginica' X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) log_reg = LogisticRegression(random_state=42) log_reg.fit(X_train, y_train) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # reshape to get a column vector y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0, 0] plt.figure(figsize=(8, 3)) # extra code – not needed, just formatting plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica proba") plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica proba") plt.plot([decision_boundary, decision_boundary], [0, 1], "k:", linewidth=2, label="Decision boundary") # extra code – this section beautifies and saves Figure 4–23 plt.arrow(x=decision_boundary, y=0.08, dx=-0.3, dy=0, head_width=0.05, head_length=0.1, fc="b", ec="b") plt.arrow(x=decision_boundary, y=0.92, dx=0.3, dy=0, head_width=0.05, head_length=0.1, fc="g", ec="g") plt.plot(X_train[y_train == 0], y_train[y_train == 0], "bs") plt.plot(X_train[y_train == 1], y_train[y_train == 1], "g^") plt.xlabel("Petal width (cm)") plt.ylabel("Probability") plt.legend(loc="center left") plt.axis([0, 3, -0.02, 1.02]) plt.grid() save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) # extra code – this cell generates and saves Figure 4–24 X = iris.data[["petal length (cm)", "petal width (cm)"]].values y = iris.target_names[iris.target] == 'virginica' X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) log_reg = LogisticRegression(C=2, random_state=42) log_reg.fit(X_train, y_train) # for the contour plot x0, x1 = np.meshgrid(np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1)) X_new = np.c_[x0.ravel(), x1.ravel()] # one instance per point on the figure y_proba = log_reg.predict_proba(X_new) zz = y_proba[:, 1].reshape(x0.shape) # for the decision boundary left_right = np.array([2.9, 7]) boundary = -((log_reg.coef_[0, 0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0, 1]) plt.figure(figsize=(10, 4)) plt.plot(X_train[y_train == 0, 0], X_train[y_train == 0, 1], "bs") plt.plot(X_train[y_train == 1, 0], X_train[y_train == 1, 1], "g^") contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) plt.clabel(contour, inline=1) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.27, "Not Iris virginica", color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", color="g", ha="center") plt.xlabel("Petal length") plt.ylabel("Petal width") plt.axis([2.9, 7, 0.8, 2.7]) plt.grid() save_fig("logistic_regression_contour_plot") plt.show() ###Output _____no_output_____ ###Markdown Softmax Regression ###Code X = iris.data[["petal length (cm)", "petal width (cm)"]].values y = iris["target"] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) softmax_reg = LogisticRegression(C=30, random_state=42) softmax_reg.fit(X_train, y_train) softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]).round(2) # extra code – this cell generates and saves 
Figure 4–25 from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(["#fafab0", "#9898ff", "#a0faa0"]) x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1)) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y == 2, 0], X[y == 2, 1], "g^", label="Iris virginica") plt.plot(X[y == 1, 0], X[y == 1, 1], "bs", label="Iris versicolor") plt.plot(X[y == 0, 0], X[y == 0, 1], "yo", label="Iris setosa") plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap="hot") plt.clabel(contour, inline=1) plt.xlabel("Petal length") plt.ylabel("Petal width") plt.legend(loc="center left") plt.axis([0.5, 7, 0, 3.5]) plt.grid() save_fig("softmax_regression_contour_plot") plt.show() ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. 1. If you have a training set with millions of features you can use Stochastic Gradient Descent or Mini-batch Gradient Descent, and perhaps Batch Gradient Descent if the training set fits in memory. But you cannot use the Normal Equation or the SVD approach because the computational complexity grows quickly (more than quadratically) with the number of features.2. If the features in your training set have very different scales, the cost function will have the shape of an elongated bowl, so the Gradient Descent algorithms will take a long time to converge. To solve this you should scale the data before training the model. Note that the Normal Equation or SVD approach will work just fine without scaling. Moreover, regularized models may converge to a suboptimal solution if the features are not scaled: since regularization penalizes large weights, features with smaller values will tend to be ignored compared to features with larger values.3. Gradient Descent cannot get stuck in a local minimum when training a Logistic Regression model because the cost function is convex. _Convex_ means that if you draw a straight line between any two points on the curve, the line never crosses the curve.4. If the optimization problem is convex (such as Linear Regression or Logistic Regression), and assuming the learning rate is not too high, then all Gradient Descent algorithms will approach the global optimum and end up producing fairly similar models. However, unless you gradually reduce the learning rate, Stochastic GD and Mini-batch GD will never truly converge; instead, they will keep jumping back and forth around the global optimum. This means that even if you let them run for a very long time, these Gradient Descent algorithms will produce slightly different models.5. If the validation error consistently goes up after every epoch, then one possibility is that the learning rate is too high and the algorithm is diverging. If the training error also goes up, then this is clearly the problem and you should reduce the learning rate. However, if the training error is not going up, then your model is overfitting the training set and you should stop training.6. Due to their random nature, neither Stochastic Gradient Descent nor Mini-batch Gradient Descent is guaranteed to make progress at every single training iteration. So if you immediately stop training when the validation error goes up, you may stop much too early, before the optimum is reached. 
A better option is to save the model at regular intervals; then, when it has not improved for a long time (meaning it will probably never beat the record), you can revert to the best saved model.7. Stochastic Gradient Descent has the fastest training iteration since it considers only one training instance at a time, so it is generally the first to reach the vicinity of the global optimum (or Mini-batch GD with a very small mini-batch size). However, only Batch Gradient Descent will actually converge, given enough training time. As mentioned, Stochastic GD and Mini-batch GD will bounce around the optimum, unless you gradually reduce the learning rate.8. If the validation error is much higher than the training error, this is likely because your model is overfitting the training set. One way to try to fix this is to reduce the polynomial degree: a model with fewer degrees of freedom is less likely to overfit. Another thing you can try is to regularize the model—for example, by adding an ℓ₂ penalty (Ridge) or an ℓ₁ penalty (Lasso) to the cost function. This will also reduce the degrees of freedom of the model. Lastly, you can try to increase the size of the training set.9. If both the training error and the validation error are almost equal and fairly high, the model is likely underfitting the training set, which means it has a high bias. You should try reducing the regularization hyperparameter _α_.10. Let's see: * A model with some regularization typically performs better than a model without any regularization, so you should generally prefer Ridge Regression over plain Linear Regression. * Lasso Regression uses an ℓ₁ penalty, which tends to push the weights down to exactly zero. This leads to sparse models, where all weights are zero except for the most important weights. This is a way to perform feature selection automatically, which is good if you suspect that only a few features actually matter. When you are not sure, you should prefer Ridge Regression. * Elastic Net is generally preferred over Lasso since Lasso may behave erratically in some cases (when several features are strongly correlated or when there are more features than training instances). However, it does add an extra hyperparameter to tune. If you want Lasso without the erratic behavior, you can just use Elastic Net with an `l1_ratio` close to 1.11. If you want to classify pictures as outdoor/indoor and daytime/nighttime, since these are not exclusive classes (i.e., all four combinations are possible) you should train two Logistic Regression classifiers. 12. Batch Gradient Descent with early stopping for Softmax RegressionExercise: _Implement Batch Gradient Descent with early stopping for Softmax Regression without using Scikit-Learn, only NumPy. Use it on a classification task such as the iris dataset._ Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris.data[["petal length (cm)", "petal width (cm)"]].values y = iris["target"].values ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$). The easiest option to do this would be to use Scikit-Learn's `add_dummy_feature()` function, but the point of this exercise is to get a better understanding of the algorithms by implementing them manually. 
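For reference, the Scikit-Learn shortcut mentioned above would look something like this sketch (assuming `add_dummy_feature` is available in your scikit-learn version); since the exercise asks us to stick to NumPy, it is shown only for comparison: ###Code
from sklearn.preprocessing import add_dummy_feature  # the shortcut mentioned above, for reference only

X_with_bias_alt = add_dummy_feature(X)  # prepends a column of 1s, same result as the manual version below ###Output _____no_output_____ ###Markdown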
So here is one possible implementation: ###Code X_with_bias = np.c_[np.ones(len(X)), X] ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but again, we want to do it manually: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size np.random.seed(42) rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance. To understand this code, you need to know that `np.diag(np.ones(n))` creates an n×n matrix full of 0s except for 1s on the main diagonal. Moreover, if `a` is a NumPy array, then `a[[1, 3, 2]]` returns an array with 3 rows equal to `a[1]`, `a[3]` and `a[2]` (this is [advanced NumPy indexing](https://numpy.org/doc/stable/user/basics.indexing.htmladvanced-indexing)). ###Code def to_one_hot(y): return np.diag(np.ones(y.max() + 1))[y] ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's scale the inputs. We compute the mean and standard deviation of each feature on the training set (except for the bias feature), then we center and scale each feature in the training set, the validation set, and the test set: ###Code mean = X_train[:, 1:].mean(axis=0) std = X_train[:, 1:].std(axis=0) X_train[:, 1:] = (X_train[:, 1:] - mean) / std X_valid[:, 1:] = (X_valid[:, 1:] - mean) / std X_test[:, 1:] = (X_test[:, 1:] - mean) / std ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = exps.sum(axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (there are 3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! 
Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.5 n_epochs = 5001 m = len(X_train) epsilon = 1e-5 np.random.seed(42) Theta = np.random.randn(n_inputs, n_outputs) for epoch in range(n_epochs): logits = X_train @ Theta Y_proba = softmax(logits) if epoch % 1000 == 0: Y_proba_valid = softmax(X_valid @ Theta) xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon)) print(epoch, xentropy_losses.sum(axis=1).mean()) error = Y_proba - Y_train_one_hot gradients = 1 / m * X_train.T @ error Theta = Theta - eta * gradients ###Output 0 3.7085808486476917 1000 0.14519367480830644 2000 0.1301309575504088 3000 0.12009639326384539 4000 0.11372961364786884 5000 0.11002459532472425 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) accuracy_score = (y_predict == y_valid).mean() accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty ok. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
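To make the extra term explicit: the penalized cost is $J(\mathbf{\Theta}) + \dfrac{\alpha}{2}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{K}{\theta_{j,k}}^2$ (the sums skip the bias row $j=0$), and since the partial derivative of $\dfrac{\alpha}{2}\theta_{j,k}^2$ with respect to $\theta_{j,k}$ is $\alpha\,\theta_{j,k}$, every non-bias row of the gradient matrix simply gains an extra $\alpha\,\theta_{j,k}$ term. That is exactly what the `np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]` line in the next cell adds.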
###Code eta = 0.5 n_epochs = 5001 m = len(X_train) epsilon = 1e-5 alpha = 0.01 # regularization hyperparameter np.random.seed(42) Theta = np.random.randn(n_inputs, n_outputs) for epoch in range(n_epochs): logits = X_train @ Theta Y_proba = softmax(logits) if epoch % 1000 == 0: Y_proba_valid = softmax(X_valid @ Theta) xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon)) l2_loss = 1 / 2 * (Theta[1:] ** 2).sum() total_loss = xentropy_losses.sum(axis=1).mean() + alpha * l2_loss print(epoch, total_loss.round(4)) error = Y_proba - Y_train_one_hot gradients = 1 / m * X_train.T @ error gradients += np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 3.7372 1000 0.3259 2000 0.3259 3000 0.3259 4000 0.3259 5000 0.3259 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) accuracy_score = (y_predict == y_valid).mean() accuracy_score ###Output _____no_output_____ ###Markdown In this case, the $\ell_2$ penalty did not change the test accuracy. Perhaps try fine-tuning `alpha`? Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.5 n_epochs = 50_001 m = len(X_train) epsilon = 1e-5 C = 100 # regularization hyperparameter best_loss = np.infty np.random.seed(42) Theta = np.random.randn(n_inputs, n_outputs) for epoch in range(n_epochs): logits = X_train @ Theta Y_proba = softmax(logits) Y_proba_valid = softmax(X_valid @ Theta) xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon)) l2_loss = 1 / 2 * (Theta[1:] ** 2).sum() total_loss = xentropy_losses.sum(axis=1).mean() + 1 / C * l2_loss if epoch % 1000 == 0: print(epoch, total_loss.round(4)) if total_loss < best_loss: best_loss = total_loss else: print(epoch - 1, best_loss.round(4)) print(epoch, total_loss.round(4), "early stopping!") break error = Y_proba - Y_train_one_hot gradients = 1 / m * X_train.T @ error gradients += np.r_[np.zeros([1, n_outputs]), 1 / C * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) accuracy_score = (y_predict == y_valid).mean() accuracy_score ###Output _____no_output_____ ###Markdown Oh well, still no change in validation accuracy, but at least early stopping shortened training a bit. 
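One small caveat about the loop above (not addressed in the code): when it breaks, `Theta` holds the parameters from the iteration whose validation loss just went up, not the ones that achieved `best_loss` (the difference is tiny here). To be strict, you could keep a checkpoint such as `best_Theta = Theta.copy()` whenever the loss improves and restore it before breaking, just as we kept `deepcopy(sgd_reg)` in the earlier `SGDRegressor` early-stopping example.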
Now let's plot the model's predictions on the whole dataset (remember to scale all features fed to the model): ###Code custom_cmap = mpl.colors.ListedColormap(['#fafab0', '#9898ff', '#a0faa0']) x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1)) X_new = np.c_[x0.ravel(), x1.ravel()] X_new = (X_new - mean) / std X_new_with_bias = np.c_[np.ones(len(X_new)), X_new] logits = X_new_with_bias @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y == 2, 0], X[y == 2, 1], "g^", label="Iris virginica") plt.plot(X[y == 1, 0], X[y == 1, 1], "bs", label="Iris versicolor") plt.plot(X[y == 0, 0], X[y == 0, 1], "yo", label="Iris setosa") plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap="hot") plt.clabel(contour, inline=1) plt.xlabel("Petal length") plt.ylabel("Petal width") plt.legend(loc="upper left") plt.axis([0, 7, 0, 3.5]) plt.grid() plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test @ Theta Y_proba = softmax(logits) y_predict = Y_proba.argmax(axis=1) accuracy_score = (y_predict == y_test).mean() accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
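As a quick sketch (not in the original notebook), you can also build the pseudoinverse yourself from the SVD, $\mathbf{X}^{+} = \mathbf{V} \mathbf{\Sigma}^{+} \mathbf{U}^T$, where $\mathbf{\Sigma}^{+}$ inverts the non-negligible singular values: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
X_b_pinv = Vt.T.dot(np.diag(1 / s)).dot(U.T)  # X+ = V @ diag(1/s) @ U.T (all singular values are significant here)
X_b_pinv.dot(y)  # same theta as np.linalg.pinv(X_b).dot(y), shown in the next cell ###Output _____no_output_____ ###Markdown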
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
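# save_fig() below writes each figure under images/<CHAPTER_ID>/ as a 300-dpi PNG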
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
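To see the pseudoinverse connection in code, here is a small sketch (my own addition, assuming the `X_b` and `y` arrays created above): it rebuilds $\mathbf{X}^{+} = \mathbf{V}\mathbf{\Sigma}^{+}\mathbf{U}^T$ from the SVD and recovers the same parameters as `lstsq()`: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
# in general, singular values below some tolerance should be zeroed out first
# (that is what np.linalg.pinv() does); here both singular values are far from 0
S_pinv = np.diag(1 / s)
X_b_pinv = Vt.T.dot(S_pinv).dot(U.T)  # Moore-Penrose pseudoinverse of X_b
X_b_pinv.dot(y)                       # same theta as above, up to floating-point error
###Output _____no_output_____ ###Markdown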
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper 
left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge 
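# Ridge regression adds an l2 penalty (alpha times the sum of squared weights, excluding the bias)
# to the MSE cost; larger alpha shrinks the weights more strongly toward zero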
np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training 
set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") 
plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function.
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values.
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629574947908294 500 0.5341631554372782 1000 0.5037712748637474 1500 0.4948056455575166 2000 0.49140819484111964 2500 0.4900085074445459 3000 0.48940742896132616 3500 0.4891431024691195 4000 0.48902516549065855 4500 0.48897205809605315 5000 0.4889480004791563 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
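The loop above stops as soon as the validation loss rises once, which can be a bit twitchy when the loss is noisy. A common variation is to allow a small "patience" budget of non-improving iterations and to keep a snapshot of the best parameters. The sketch below is my own addition (the `patience` knob is not from the book); it reuses the variables defined above and trains a separate `Theta_es`, so the `Theta` obtained above is left untouched: ###Code
m = len(X_train)
eta, alpha, epsilon = 0.1, 0.1, 1e-7
patience, bad_steps = 20, 0            # tolerate up to `patience` non-improving iterations
best_loss, best_Theta = np.infty, None

Theta_es = np.random.randn(n_inputs, n_outputs)
for iteration in range(5001):
    # gradient step on the regularized training loss, as in the cells above
    error = softmax(X_train.dot(Theta_es)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_es[1:]]
    Theta_es = Theta_es - eta * gradients

    # regularized cross-entropy on the validation set
    Y_proba_valid = softmax(X_valid.dot(Theta_es))
    xentropy = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    loss = xentropy + alpha * 1/2 * np.sum(np.square(Theta_es[1:]))

    if loss < best_loss:
        best_loss, best_Theta, bad_steps = loss, Theta_es.copy(), 0
    else:
        bad_steps += 1
        if bad_steps >= patience:
            Theta_es = best_Theta      # roll back to the best parameters seen
            break
###Output _____no_output_____ ###Markdown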
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown فصل چهارم - ترین مدل های خطیتوی این فصل به معرفی مدل های خطی رگرسیون و نحوه کارشون میپردازیم. انواع پکیچ ها و کاربردهای ان ###Code # Python ≥3.5 is required import sys # بررسی اینکه نسخه پایتون حداقل ۳.۵ است assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required # یکی از معروف ترین کتابخانه های یادگیری ماشین در پایتون import sklearn assert sklearn.__version__ >= "0.20" # Common imports # کتابخانه کار با اعداد import numpy as np # کتابخانه رابط پایتون و موارد مربوط به سیستم عامل import os # to make this notebook's output stable across runs # سید برای اعداد رندوم تا شبیه خروجی در راستای خاصی باشند np.random.seed(42) # To plot pretty figures # موارد لازم برای رسم و ذخیره کردن نمودارها %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures # محل ذخیره سازی نمودار PROJECT_ROOT_DIR = "." CHAPTER_ID = "classification" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) # تابع ذخیره سازی نمودار def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # برای اینکه اگر به خطای خاصی برخورد کردیم نادیدش بگیریم import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") warnings.filterwarnings(action="ignore", message="^") ###Output _____no_output_____ ###Markdown رگرسیون خطی و معادله نرمال Linear regression using the Normal Equation توی این بخش به بررسی رگرسیون خطی در پایه ترین حالت یعنی معادله نرمال میپردازیم برای شروع یک مقدار داده رو ایجاد میکنیم ###Code X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output Saving figure generated_data_plot ###Markdown با کمک معادله نرما بیایم وتتا رو در کمینه ترین حالت محاسبه کنیم ###Code X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown حالا با توجه به پارامتر هایی که داریم بیایم پیش بینی کنیم ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown باتوجه به معادله خطی که پیدا کردیم حالا بیایم و این رو در حالت معادله خط نمون بدیم که خط مورد پیش بینی به چه شکل هست ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() ###Output Saving figure linear_model_predictions_plot ###Markdown حالا با کمک مدل های سایکیت لرن این پیش بینی رو انجام بدیم ###Code from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown مدل LinearRegression بر پایه scipy.linalg.lstsq() است که میتونیم مستقیما ازش استفاده کنیم ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown رگرسیون خطی با فاصله گرادیان ( نسخه بچ) Linear regression using batch gradient descent در این روش با کمک فاصله گرادیان میایم و محاسبه میکنیم که پارامتر های رگرسیون خطی به چه شکل است ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) ###Output _____no_output_____ ###Markdown نحوه تغییرات رگرسیون خطی با نسخه یادگیری های متفاوتنرخ کم : یادگیری سریع تر - ممکنه به 
بهترین پارامترها نرسیمنرخ یادگیری زیاد: یادگیری خیلی کند و طولانی - ممکن است در مینیموم محلی گیر کنیمانتخاب نرخ یادگیری مناسب و میزان دور های یادگیری مناسب از پارامتر های مهم یک رگرسیون خطی و بعد تر در شبکه های عصبی هست. ###Code np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown رگرسیون خطی با فاصله گرادیان تصادفی Stochastic Gradient Descentمشکلاتی که در نسخه قبلی با کمک انتخاب تصادفی نمونه از نمونه های اموزش میتونیم رفع کنیم و آموزش رو خیلی سریع تر کنیم ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta ###Output _____no_output_____ ###Markdown همانطور که دیدید در این جا سریع تر به یک بهینگی پارامتری رسیدیدم.برای استفاده از کتاب خانه سایکیت لرن میتونیم به این شکل فراخوانی کنیم ###Code from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown رگرسیون خطی با فاصله گرادیان مینی بچ Mini-batch gradient descentیکی از مشکلات بخش قبلی این بود که فقط یک داده رو هر بار میبینه که وقتی میزان داده ها زیاد هست حتی اگر عادلانه برخورد کنه بازم به همه داده ها نمی رسه. 
توی مینی بچ میایم داده ها رو توی دسته های کوچیک میدیم به مدل تا اینطوری هم مسئله تصادفی رو داشته باشیم و هم مقدار داده بیشتر رو تزریق کنیم به مدل ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown همانطور که از تصویر مشخص است اگر ما فاصله گرادیان معمولی داشته باشیم و همه داده ها رو تزریق کنیم راحتر میرسیم ولی به خاطر مسئله رم بهترین رویکرد مینی-بچ است رگرسیون چندجمله ای Polynomial Regressionتا الان درباره رگرسیون خطی صحبت کردیم. حالا اگر داده ها از حالت خطی بیرون بیان چی؟ ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) X[0] , y[0] X.shape , y.shape plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() ###Output Saving figure quadratic_data_plot ###Markdown همانطور که توی تصویر مشخص هست داده های ما به شکلی نیستند که بشه صرفا با یک خط ساده اونها رو پیش بینی کرد.نزدیک ترین حالت شبیه تابع x به توان دو ریاضی هستندیکی از راه حل هایی که داریم این هست که داده ها رو از حالت چند جمله ای به حالت خطی ببریم با کمک PolynomialFeatures میتونیم این کار رو کنیم ###Code from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() ###Output Saving figure quadratic_predictions_plot ###Markdown حالا اگر این درجه رو از دو ببریم به ۳۰۰ مدل بهتری میگیریم؟ نه لزوما ###Code from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = 
LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown نمودار یادگیری Learning Curves برای مقایسه بهتر بیایم ببینیم میزان داده ها چه تاثیری روی یادگیری داره با کمک یک نمودار یادگیری ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure underfitting_learning_curves_plot ###Markdown ابتدا از یک مدل خطی ساده استفاده میکینم میبینم که با افزایش داده ها میزان لاس بیشتر شده ولی فاصله بین اموزش و اعتبار سنجی خیلی کم هستاین به ما میگه که مدل به خوبی یاد نگرفته هنوز و یکی از کارها استفاده از مدل پیچیده تر هست ###Code from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown ابتدا از یک مدل خطی ساده استفاده میکینم میبینم که با افزایش داده ها میزان لاس بیشتر شده ولی فاصله بین اموزش و اعتبار سنجی خیلی کم هستبا کمک یک مدل پیچیده تر ابتدا یکم اورفیت رو شاهد هستیم و کم کم با اضافه کردن داده های جدید به مدلمون میتونیم ببینم که روی داده های اعتبار سنجی لاس کمتری رو داریم و تا حدودی وضعیت مدل بهتر از قبل هست اما هنوز فاصله بین داده های تست و آموزشی قابل توجه استا میگه که مدل به خوبی یاد نگرفته هنوز و یکی از کارها استفاده از مدل پیچیده تر هست میزان سازی - محدود سازی مدل Regularized Linear Modelsتوی این قسمت درباره متد هایی که میتونیم مدل هامون رو محدود کنیم و نذاریم اورفیت بشند صحبت میکنیم برای شروع یک مقدار داده میسازیم ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown L1 - Ridgeابتدا فرمول L1 یا Ridge رو بررسی میکنیم و با کمک پکیچ سایکیت لرن میایم یک رگرسیون خطی با اون میسازیم ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) 
ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown بررسی تاثیرات الفا روی پیدا کردن رگرسیون خطی مناسب ###Code from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown L2 - Lassoابتدا فرمول L2 یا Lasso رو بررسی میکنیم و با کمک پکیچ سایکیت لرن میایم یک رگرسیون خطی با اون میسازیم ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown L1 + L2 - ElasticNetتوی اینجا از ترکیبL1 و L2 به وجود میاد و میتونیم از قدرت هر وشون استفاده کنیم ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown به صورت کلی L1 روی رگرسیون خطی وقتی که ویژگی های مسئله کم هستند خوب جواب میده. وقتی که ویژگی های مسئله بیشتر میشن شما باید از ترکیبشون یا L2 استفاده کنید. توقف زودهنگام آموزشEarly Stopping یکی از مهم ترین روش های جلوگیری از اورفیت داده ها استفاده از Early Stopping است. ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown برای شروع یک مقدار داده رو میسازیم و اینکه یک مدل اولیه رگرسیون استفاده میکنیم و مقدار دورش رو روی ۱ تنظیم میکنیم. 
بعدش به اندازه ۱۰۰۰ بار اموزش رو تکرار میکنیم و بهترین حالت مدل رو در جایی ذخیره میکنی ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() ###Output Saving figure early_stopping_plot ###Markdown حالا بهترین مدل و دور مربوط رو نمایش میدیم ###Code best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) ###Output _____no_output_____ ###Markdown روند تاثیر محدود سازی رو داده ها ###Code def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 
20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown رگرسیون لجستیکLogistic regression اگر مسئله ما دسته بندی هست با رگرسیون خطی نمیتونیم حل کنیم چون جواب یک عدد هست نه احتمال بودن در یک کلاس که میایم از رگرسیون لجستیک استفاده میکنیم ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) ###Output _____no_output_____ ###Markdown مرز تصمیم گیری جایی هست که نمیشه به دقت گفت که این نسخه مربوط به کدوم دسته هست و بدترین حالت ممکن برای یک نمونه و مدل است ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown حالتی که توی کتاب نمایش داده شده ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, 
-0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() ###Output Saving figure softmax_regression_contour_plot ###Markdown Predict which class this instance belongs to: ###Code softmax_reg.predict([[5, 2]]) ###Output _____no_output_____ ###Markdown The class membership probabilities, as a percentage for each class: ###Code (softmax_reg.predict_proba([[5, 2]]) * 100).astype(int) ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
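# Comparing the three optimization paths in parameter space: batch GD heads smoothly toward the minimum,
# while the stochastic and mini-batch paths keep bouncing around it.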
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() ###Output Saving figure quadratic_predictions_plot ###Markdown Learning Curves ###Code from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) 
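# The degree-10 polynomial pipeline above has far more capacity than this small dataset needs, so the
# learning curves plotted below show the validation RMSE staying well above the training RMSE (overfitting).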
plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown Ridge Regression ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) print(ridge_reg.intercept_, ridge_reg.coef_) print(ridge_reg.predict([[1.5]])) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) print(ridge_reg.intercept_, ridge_reg.coef_) print(ridge_reg.predict([[1.5]])) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) print(sgd_reg.intercept_, sgd_reg.coef_) print(sgd_reg.predict([[1.5]])) ###Output [0.53947472] [0.62043411] [1.47012588] ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) print(lasso_reg.intercept_, lasso_reg.coef_) print(lasso_reg.predict([[1.5]])) # SGD approach sgd_lasso = SGDRegressor(penalty='l1',alpha=0.1,random_state=42) sgd_lasso.fit(X, y.ravel()) print(sgd_lasso.intercept_, sgd_lasso.coef_) print(sgd_lasso.predict([[1.5]])) ###Output [0.64450934] [0.54050476] [1.45526648] ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) print(elastic_net.intercept_, elastic_net.coef_) print(elastic_net.predict([[1.5]])) # SGD approach sgd_elastic = SGDRegressor(penalty='elasticnet', alpha=0.1, l1_ratio=0.5, random_state=42) sgd_elastic.fit(X, y.ravel()) print(sgd_elastic.intercept_, sgd_elastic.coef_) print(sgd_elastic.predict([[1.5]])) ###Output [0.61855153] [0.56038633] [1.45913103] ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, 
best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model print(best_model.intercept_) print(best_model.predict(poly_scaler.transform([[1.5]]))) # Early stopping using Sklearn # Note that early_stopping will set aside a fraction # of the training data, however in the above example # we perform that split and evaluate manually. sgd_reg_es = SGDRegressor(max_iter=1000, tol=1e-5,penalty=None, learning_rate="constant", eta0=0.0005, random_state=42, early_stopping=True ) sgd_reg_es.fit(X_train_poly_scaled, y_train) print(f'Epoch: {sgd_reg_es.n_iter_}') print(sgd_reg_es.intercept_) print(sgd_reg_es.predict(poly_scaler.transform([[1.5]]))) ###Output Epoch: 293 [2.88999355] [3.57713102] ###Markdown L1 & L2 Norm Plots ###Code %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure 
lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() # t is sometimes called the logit # This is because the logit function log(p / 1-p) # where p = sig(t) i.e. the probability returned by applying # the logistic function to t, will return the original t tx = -1.5 # theta . X as returned by normal linear regression sig_tx = 1 / (1 + np.exp(-tx)) # logistic function applied to tx returns a probability logit= np.log(sig_tx/(1-sig_tx)) # logit funtion applied to probability returns tx print(sig_tx, logit, tx == logit) from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 X.shape, y.shape ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) # predict_proba returns the probability for negative (1-p) and postive classes. # Each row has two columns [prob_neg_class,prob_pos_class] plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = 
plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output Saving figure logistic_regression_contour_plot ###Markdown Softmax Regression ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
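For comparison, here is a minimal sketch of what that `train_test_split()` shortcut could look like (purely illustrative; the `_alt` variable names are made up for this example and are not used anywhere else in the exercise): ###Code from sklearn.model_selection import train_test_split
# Illustrative shortcut only: hold out 20% for the test set, then 25% of the remainder (20% of the total) for validation
X_rest_alt, X_test_alt, y_rest_alt, y_test_alt = train_test_split(X_with_bias, y, test_size=0.2, random_state=2042)
X_train_alt, X_valid_alt, y_train_alt, y_valid_alt = train_test_split(X_rest_alt, y_rest_alt, test_size=0.25, random_state=2042) ###Output _____no_output_____ ###Markdown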
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. 
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make 
this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown 
style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", 
std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, 
fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return 
np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", 
fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrices for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out.
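For instance, a quick shape check like the one below (an added illustration, not in the original exercise) makes it obvious whether the matrices line up: ###Code
# Added illustration: check the shape of each term before training
Theta_check = np.random.randn(n_inputs, n_outputs)   # (3, 3)
logits_check = X_train.dot(Theta_check)              # (90, 3)
Y_proba_check = softmax(logits_check)                # (90, 3)
print(X_train.shape, Theta_check.shape, logits_check.shape,
      Y_proba_check.shape, Y_train_one_hot.shape)
###Output _____no_output_____ ###Markdown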
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! 
We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') 
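# (added comment) the two arrows point away from the decision boundary, toward the side where each class becomes more likely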
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrices for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing.
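One way to catch that kind of silent bug (an added illustration, not in the original exercise; it assumes SciPy ≥ 1.2 for `scipy.special.softmax`) is to compare your implementation with a trusted one on a small batch: ###Code
from scipy.special import softmax as scipy_softmax

# Added illustration: sanity-check the manual softmax() against SciPy's implementation
sample_logits = X_train[:5].dot(np.random.randn(n_inputs, n_outputs))
print(np.allclose(softmax(sample_logits), scipy_softmax(sample_logits, axis=1)))  # expect True
###Output _____no_output_____ ###Markdown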
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output _____no_output_____ ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output _____no_output_____ ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing.
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # 불필요한 경고를 무시합니다 (사이파이 이슈 #5998 참조) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). 
`np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) 
theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, 
include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt 
import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - 
y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ 
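A tiny numerical check of the softmax equation above (an added illustration in English, not part of the original notebook): ###Code
# Added illustration: softmax([1, 2, 3]) computed directly from Equation 4-20
scores = np.array([[1.0, 2.0, 3.0]])
print(np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True))  # ≈ [[0.09  0.245 0.665]]
###Output _____no_output_____ ###Markdown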
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
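As an added aside (not in the original notebook), you can also rebuild the pseudoinverse from the SVD yourself, using $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^\top$ and inverting only the non-negligible singular values: ###Code
# Added illustration: the pseudoinverse reconstructed from the SVD
U_svd, s_svd, Vt_svd = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s_svd > 1e-10, 1 / s_svd, 0.)   # invert only non-negligible singular values
X_b_pinv = Vt_svd.T.dot(np.diag(s_inv)).dot(U_svd.T)
np.allclose(X_b_pinv.dot(y), theta_best)         # same least squares solution as the Normal Equation
###Output _____no_output_____ ###Markdown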
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
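# Batch GD (plotted below) heads almost straight for the minimum, while the stochastic
# and mini-batch parameter paths keep jittering around it.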
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
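# Note the gap between the two curves: the degree-10 model fits the training data
# noticeably better than the validation data, the hallmark of overfitting.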
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) import numpy as np from sklearn.model_selection import train_test_split np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone from sklearn.pipeline import Pipeline from sklearn.linear_model import SGDRegressor from sklearn.preprocessing import PolynomialFeatures from sklearn.preprocessing import StandardScaler from sklearn.metrics import mean_squared_error poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None 
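# Early stopping: track the validation error after each training epoch and keep a copy
# of the best model. Note that sklearn.base.clone() copies the hyperparameters but not
# the learned weights, which is why the variant of this notebook further down uses
# copy.deepcopy() instead.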
for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code import matplotlib.pyplot as plt sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) #save_fig("early_stopping_plot") plt.show() best_epoch, best_model best_model = None for epoch in range(250): sgd_reg.fit(X_train_poly_scaled, y_train) best_model=clone(sgd_reg) best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") 
ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") 
plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
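(For reference, the Scikit-Learn shortcut would look roughly like the sketch below; the `_sk` variable names are just for illustration and nothing later relies on them.) ###Code from sklearn.model_selection import train_test_split

X_temp, X_test_sk, y_temp, y_test_sk = train_test_split(X_with_bias, y, test_size=0.2, random_state=42)
X_train_sk, X_valid_sk, y_train_sk, y_valid_sk = train_test_split(X_temp, y_temp, test_size=0.25, random_state=42)  # 0.25 * 0.8 = 0.2 ###Output _____no_output_____ ###Markdown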
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out.
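For example, a quick shape check along these lines (a small sketch using the arrays defined above) makes the bookkeeping explicit before writing the training loop: ###Code print(X_train.shape)          # (m, n_inputs): one row per training instance, n_inputs == 3
print(Y_train_one_hot.shape)  # (m, n_outputs): one column per class
print(n_inputs, n_outputs)    # Theta will be (n_inputs, n_outputs), so X_train.dot(Theta) is (m, n_outputs) ###Output _____no_output_____ ###Markdown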
The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better?
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) plt.rcParams['svg.fonttype'] = 'none' # text not as curves def save_fig(fig_id, tight_layout=True, fig_extension="svg", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution, transparent=True) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 20 * np.random.rand(m, 1) - 10 y = 1.5 * (X + 5)**2 + 2 + np.random.randn(m, 1) %matplotlib inline plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-10, 10, 0, 30]) #save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0:5] X_poly[0:5] %matplotlib notebook #import matplotlib.pyplot as plt fig = plt.figure(figsize=(9,9)) ax = fig.add_subplot(111, projection='3d', proj_type = 'persp') # https://stackoverflow.com/questions/23840756/how-to-disable-perspective-in-mplot3d#49856771 ax.set_xlabel("x0")# https://stackoverflow.com/questions/37711538/matplotlib-3d-axes-ticks-labels-and-latex ax.set_ylabel("x1") ax.set_zlabel("y") ax.scatter(X_poly[:,0], X_poly[:,1], y) ax.view_init(azim=-90, elev=0) # x0, y #ax.view_init(azim=-90, elev=90) # x0, x1 #ax.view_init(azim=0, elev=0) # x1, y lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not 
shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output _____no_output_____ ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output _____no_output_____ ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
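For reference, Ridge Regression also has a closed-form solution, $\hat{\mathbf{\theta}} = \left(\mathbf{X}^T\mathbf{X} + \alpha\mathbf{A}\right)^{-1}\mathbf{X}^T\mathbf{y}$, where $\mathbf{A}$ is the identity matrix except for a 0 in the top-left cell so that the bias term is not regularized. A minimal sketch (assuming the `X` and `y` generated above; the result should closely match the `ridge_reg` model fitted with the Cholesky solver earlier): ###Code X_b_ridge = np.c_[np.ones((len(X), 1)), X]  # add x0 = 1 to each instance
A = np.identity(X_b_ridge.shape[1])
A[0, 0] = 0  # do not regularize the bias term
alpha_ridge = 1
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + alpha_ridge * A).dot(X_b_ridge.T).dot(y)
theta_ridge ###Output _____no_output_____ ###Markdown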
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = 
np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
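The probabilities returned by `predict_proba()` are simply the sigmoid applied to the model's decision score $t = \mathbf{\theta}^T \mathbf{x}$. Once `log_reg` has been fitted and `X_new` defined in the next cell, a quick sanity check could look like this sketch: ###Code # Run this after fitting log_reg and defining X_new in the next cell:
manual_proba = 1 / (1 + np.exp(-log_reg.decision_function(X_new)))
np.allclose(manual_proba, log_reg.predict_proba(X_new)[:, 1])  # expected: True ###Output _____no_output_____ ###Markdown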
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance #X_b theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) 
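# With 50 passes of SGD over the data, the learned parameters should end up close to
# the Normal Equation solution (the data was generated with intercept 4 and slope 3).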
sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], 
y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", 
PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') 
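# Added comment: the left-hand subplot of each row shows the unregularized MSE cost J together
# with the contours of the corresponding norm penalty N and two gradient descent paths (one on J,
# one on the penalty alone); the right-hand subplot shows the regularized cost JR and its own path.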
plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 
1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. 
Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for a given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values.
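One optional sanity check before running the training loop (this cell is not in the original notebook, just a sketch reusing the `softmax()` function, `X_train`, `Y_train_one_hot`, `n_inputs` and `n_outputs` defined above): compare the analytical gradient with a central finite-difference estimate. A large mismatch usually means some term or index got mixed up. ###Code # Hedged sketch: numerical check of the softmax gradient formula.
def softmax_loss_and_grad(Theta, X, Y_one_hot, epsilon=1e-7):
    Y_proba = softmax(X.dot(Theta))
    loss = -np.mean(np.sum(Y_one_hot * np.log(Y_proba + epsilon), axis=1))
    grad = 1/len(X) * X.T.dot(Y_proba - Y_one_hot)
    return loss, grad

rng = np.random.RandomState(0)   # local RNG so the global random state is left untouched
Theta_check = rng.randn(n_inputs, n_outputs)
_, grad = softmax_loss_and_grad(Theta_check, X_train, Y_train_one_hot)
num_grad = np.zeros_like(grad)
h = 1e-6
for i in range(n_inputs):
    for j in range(n_outputs):
        Theta_plus, Theta_minus = Theta_check.copy(), Theta_check.copy()
        Theta_plus[i, j] += h
        Theta_minus[i, j] -= h
        loss_plus, _ = softmax_loss_and_grad(Theta_plus, X_train, Y_train_one_hot)
        loss_minus, _ = softmax_loss_and_grad(Theta_minus, X_train, Y_train_one_hot)
        num_grad[i, j] = (loss_plus - loss_minus) / (2 * h)

# The difference should be very small (not exactly zero, because of the epsilon term
# and floating-point rounding).
np.max(np.abs(grad - num_grad)) ###Output _____no_output_____ ###Markdown Now the actual training loop: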
###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. 
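As an aside (not in the original notebook): stopping at the very first increase can be sensitive to noise in the validation loss, so a common variant waits until the loss has failed to improve for several consecutive iterations before stopping. A minimal sketch of that rule, using a hypothetical `EarlyStopper` helper: ###Code # Hedged sketch of a patience-based stopping rule (illustration only; the cell
# below keeps the simpler "stop at the first increase" rule).
class EarlyStopper:
    def __init__(self, patience=10):
        self.patience = patience
        self.best_loss = float("inf")
        self.iters_without_progress = 0

    def should_stop(self, val_loss):
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.iters_without_progress = 0
        else:
            self.iters_without_progress += 1
        return self.iters_without_progress >= self.patience

stopper = EarlyStopper(patience=10)
# inside a training loop one would call: if stopper.should_stop(validation_loss): break ###Output _____no_output_____ ###Markdown Back to the simpler rule used in the book: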
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercices in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: 
# not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() 
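# Added comment: the three steps are chained into one pipeline so that, for every tested
# degree (300, 2 and 1), the polynomial features are standardized before fitting the line.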
polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, 
polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * 
np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', 
ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for a given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): # the categories start from 0, so add one to get the number of columns n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing.
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) # add a tiny epsilon inside the log to avoid taking the log of zero loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) # closed-form gradient equation (p. 129 in the book): cross-entropy gradient plus the l2 penalty term gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better?
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
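The pseudoinverse itself is obtained from the Singular Value Decomposition: if $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$, then $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ inverts the non-zero singular values and leaves the (near-)zero ones at zero. A quick hand-rolled sketch (not in the original notebook; it skips the small-singular-value thresholding that `np.linalg.pinv()` performs, which is harmless here because `X_b` has full column rank): ###Code U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
X_b_pinv = Vt.T.dot(np.diag(1 / s)).dot(U.T)   # V @ Sigma^+ @ U.T
X_b_pinv.dot(y)   # same least-squares solution as theta_best_svd above ###Output _____no_output_____ ###Markdown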
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown My exercise solutions 1. to 11. These were theoretical questions. 12. ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown Note: I looked at the solution a bit for the below code that adds the bias. ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] import random def train_test_split(X, y, val_ratio=0.2, test_ratio=0.2): train_length = int(len(X) * (1 - val_ratio - test_ratio)) val_length = int(len(X) * val_ratio) random.shuffle(X) random.shuffle(y) X_train = X[:train_length] y_train = y[:train_length] X_val = X[train_length:(train_length + val_length)] y_val = y[train_length:(train_length + val_length)] X_test = X[(train_length + val_length):] y_test = y[(train_length + val_length):] return X_train, y_train, X_val, y_val, X_test, y_test X_train, y_train, X_val, y_val, X_test, y_test = train_test_split(X_with_bias, y) print(len(X_train)) print(len(y_train)) print(len(X_val)) print(len(y_val)) print(len(X_test)) print(len(y_test)) print(X_train) print(y_train) def to_one_hot(y): y_one_hot = [] for value in y: if value == 0: y_one_hot.append([1, 0, 0]) elif value == 1: y_one_hot.append([0, 1, 0]) elif value == 2: y_one_hot.append([0, 0, 1]) return y_one_hot ###Output _____no_output_____ ###Markdown If the y vector had more options, the above function would get tedious to write (because I could have [0, 0, ..., 1 ..., 0]. That would be a pain. ###Code y_train_one_hot = to_one_hot(y_train) print(y_train) print(y_train_one_hot) def softmax(s): numerator = np.exp(s) denominator = np.sum(np.exp(s), axis=1, keepdims=True) return numerator / denominator ###Output _____no_output_____ ###Markdown Note: For the function above I looked at the solution a bit. 
###Code print(softmax(y_train_one_hot)) n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y)) # == 3 (3 iris classes) print(X_train.shape) print(len(np.unique(y))) ###Output 3 ###Markdown Now for the gradient descent: ###Code eta = 0.01 n_iterations = 1001 epsilon = 1e-7 # so we don't try to take the log of 0 m = len(X_train) theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(theta) y_proba = softmax(logits) loss = -np.mean(np.sum(y_train_one_hot * np.log(y_proba + epsilon), axis=1)) # y_proba is already the softmax output, so only its log is needed here error = y_proba - y_train_one_hot if ((iteration % 500) == 0): print("Iteration: " + str(iteration) + "; loss: " + str(loss)) gradients = 1/m * X_train.T.dot(error) theta = theta - eta * gradients ###Output Iteration: 0; loss: 1.1254676117494151 Iteration: 500; loss: 1.1013337854756082 Iteration: 1000; loss: 1.09892832055843 ###Markdown Note: For the above function I consulted the official solution. ###Code logits = X_val.dot(theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_val) accuracy_score ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class, which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
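As a quick aside (a sketch, not part of the original solution): since each row of an identity matrix is already a one-hot vector, the conversion can also be written as a single indexing expression; the helper defined next does the same thing more explicitly. ###Code
# Sketch: one-hot encode the first 10 training labels by indexing into an identity matrix
np.eye(y_train.max() + 1)[y_train[:10]]
###Output _____no_output_____ ###Markdown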
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 4.597130923472287 500 1.0908127529355436 1000 1.0869544648915044 1500 1.0848375011351523 2000 1.0835671826737425 2500 1.0827229443791269 3000 1.082105168466696 3500 1.0816174822554023 4000 1.081212060324179 4500 1.0808641153835157 5000 1.080559941470915 ###Markdown And that's it! The Softmax model is trained. 
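In the spirit of the debugging advice above (evaluating each term independently), here is an optional sketch, not part of the original exercise, that compares the analytical gradient with a numerical finite-difference estimate at the trained `Theta`; the two should agree closely. ###Code
# Sketch: finite-difference check of the gradient of the (unregularized) cross-entropy loss
def xentropy(T):
    P = softmax(X_train.dot(T))
    return -np.mean(np.sum(Y_train_one_hot * np.log(P + epsilon), axis=1))

analytical = 1 / len(X_train) * X_train.T.dot(softmax(X_train.dot(Theta)) - Y_train_one_hot)
numerical = np.zeros_like(Theta)
h = 1e-6
for i in range(Theta.shape[0]):
    for j in range(Theta.shape[1]):
        T_plus, T_minus = Theta.copy(), Theta.copy()
        T_plus[i, j] += h
        T_minus[i, j] -= h
        numerical[i, j] = (xentropy(T_plus) - xentropy(T_minus)) / (2 * h)

np.max(np.abs(analytical - numerical))  # should be very small (around 1e-7 or below)
###Output _____no_output_____ ###Markdown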
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients print(Theta) ###Output [[ 0.51011728 0.74237601 0.53943335] [-0.04789373 -0.00246172 0.05035546] [ 0.04824824 -0.07473776 0.02648952]] ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
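One small refinement worth considering (a sketch, not how the loop above is written): when the loop breaks, `Theta` has already taken one step past the best point, so keeping a snapshot of the best parameters and restoring it before stopping gives back the model that actually achieved the lowest validation loss. `best_Theta` is a name introduced only for this sketch. ###Code
eta = 0.1
n_iterations = 5001
alpha = 0.1  # regularization hyperparameter, as above
best_loss = np.infty
best_Theta = None  # snapshot of the best parameters seen so far
Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # same regularized batch step as above
    error = softmax(X_train.dot(Theta)) - Y_train_one_hot
    gradients = 1 / len(X_train) * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # regularized validation loss, as above
    Y_proba = softmax(X_valid.dot(Theta))
    loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
            + alpha * 1/2 * np.sum(np.square(Theta[1:])))

    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta.copy()
    else:
        Theta = best_Theta  # roll back to the best parameters before stopping
        print(iteration, loss, "early stopping!")
        break
###Output _____no_output_____ ###Markdown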
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 설정 파이썬 2와 3을 모두 지원합니다. 공통 모듈을 임포트하고 맷플롯립 그림이 노트북 안에 포함되도록 설정하고 생성한 그림을 저장하기 위한 함수를 준비합니다: ###Code # 파이썬 2와 파이썬 3 지원 from __future__ import division, print_function, unicode_literals # 공통 import numpy as np import os # 일관된 출력을 위해 유사난수 초기화 np.random.seed(42) # 맷플롯립 설정 %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # 한글출력 matplotlib.rc('font', family='NanumBarunGothic') plt.rcParams['axes.unicode_minus'] = False # 그림을 저장할 폴드 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-", linewidth=2, label="예측") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 scipy.linalg.lstsq() 함수("least squares"의 약자)를 사용하므로 직접 호출할 수 있습니다: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_(pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 경사 하강법을 사용한 선형 회귀 ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output _____no_output_____ ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 무작위 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 빠짐 y_predict = X_new_b.dot(theta) # 책에는 빠짐 style = "b-" if i > 0 else "r--" # 책에는 빠짐 plt.plot(X_new, y_predict, style) # 책에는 빠짐 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 빠짐 plt.plot(X, y, "b.") # 책에는 빠짐 plt.xlabel("$x_1$", fontsize=18) # 책에는 빠짐 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 빠짐 plt.axis([0, 2, 0, 15]) # 책에는 빠짐 save_fig("sgd_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=5, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 무작위 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in 
range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="SGD") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="미니배치") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="배치") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="예측") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="훈련") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="검증") plt.legend(loc="upper right", fontsize=14) 
# 책에는 빠짐 plt.xlabel("훈련 세트 크기", fontsize=14) # 책에는 빠짐 plt.ylabel("RMSE", fontsize=14) # 책에는 빠짐 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 빠짐 save_fig("underfitting_learning_curves_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 빠짐 save_fig("learning_curves_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 ###Output _____no_output_____ ###Markdown 규제가 있는 모델 ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, penalty="l2", tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) 
y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('최선의 모델', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="검증 세트") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="훈련 세트") plt.legend(loc="upper right", fontsize=14) plt.xlabel("에포크", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 이어서 학습합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # 편향은 무시 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0, labelpad=15) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') 
plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 넓이 y = (iris["target"] == 2).astype(np.int) # Iris-Virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown 향후 사이킷런 0.22 버전에서 `LogisticRegression` 클래스의 `solver` 매개변수 기본값이 `liblinear`에서 `lbfgs`로 변경될 예정입니다. 사이킷런 0.20 버전에서 `solver` 매개변수를 지정하지 않는 경우 이에 대한 경고 메세지를 출력합니다. 경고 메세지를 피하고 출력 결과를 일관되게 유지하기 위하여 `solver` 매개변수를 `liblinear`로 설정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='liblinear', random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "결정 경계", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("꽃잎의 폭 (cm)", fontsize=14) plt.ylabel("확률", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver='liblinear', C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Iris-Virginica 아님", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("꽃잎의 길이", fontsize=14) plt.ylabel("꽃잎의 폭", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = 
iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("꽃잎의 길이", fontsize=14) plt.ylabel("꽃잎의 폭", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 가능한 한가지 방법은 다음과 같습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now comes the somewhat tricky part: training! Theoretically it is simple: it's just a matter of translating the math equations into Python code. In practice, however, it can be quite tricky: in particular, it's easy to mix up the terms or the indices. You can even end up with code that looks like it works but does not actually compute the right thing. When unsure, write down the shape of each term in the equation and make sure the corresponding terms in your code match, and it also helps to evaluate each term independently and print it out. In fact you don't need to do any of this, since it is all well implemented in Scikit-Learn, but building it yourself helps you understand how it works. The formula to implement is the cost function: $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ cannot be computed if $\hat{p}_k^{(i)} = 0$. To avoid `nan` values we will add a tiny value $\epsilon$ inside $\log\left(\hat{p}_k^{(i)}\right)$. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown That's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's check the predictions and the accuracy on the validation set: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Wow, this model seems to work very well. For practice, let's add a bit of $\ell_2$ regularization. The following code is almost identical to the one above, but the loss now has an added $\ell_2$ penalty and the gradients have the extra term (note that we don't regularize the first element of `Theta` since it corresponds to the bias term). Let's also try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.4946891059460321 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.48891736218308185 4500 0.4888643337449302 5000 0.4888403120738818 ###Markdown Because of the added $\ell_2$ penalty the loss looks a bit larger than before, but perhaps this model will perform better? Let's check: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Wow, perfect accuracy! We may just have been lucky with this validation set, but it certainly worked well. Now let's add early stopping. For this we need to compute the loss on the validation set at every iteration and stop when the error starts growing.
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, and faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab **Warning**: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
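# compare the paths the three Gradient Descent variants take through parameter space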
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
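# overlay the training-set RMSE on the same axes for comparison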
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390373 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629506 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Implementing Learning Curves using Scikit-learn ###Code from sklearn.model_selection import learning_curve from sklearn.metrics import make_scorer def plot_learning_curves_sklearn(model, X, y): train_sizes, train_scores, val_scores = learning_curve(model, X, y, train_sizes=np.linspace(1/80, 1, 80), scoring = make_scorer(mean_squared_error)) train_scores_mean = np.mean(train_scores, axis=1) val_scores_mean = np.mean(val_scores, axis=1) plt.plot(np.sqrt(train_scores_mean), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_scores_mean), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Training set size", fontsize=14) plt.ylabel("RMSE", fontsize=14) lin_reg = LinearRegression() plot_learning_curves_sklearn(lin_reg, X, y) plt.axis([0, 80, 0, 3]) save_fig("learning_curves_plot2") plt.show(); plot_learning_curves_sklearn(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) save_fig("learning_curves_plot3") plt.show(); ###Output C:\Users\ayh17\anaconda3\lib\site-packages\sklearn\model_selection\_validation.py:1647: RuntimeWarning: Removed duplicate entries from 'train_sizes'. Number of ticks will be less than the size of 'train_sizes': 79 instead of 80. warnings.warn( ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. 
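Before training, a quick sanity check on a couple of made-up rows of logits (the values below are arbitrary, chosen only for illustration) confirms that each row of the result is a valid probability distribution: ###Code
demo_logits = np.array([[1.0, 2.0, 3.0],
                        [10.0, 10.0, 10.0]])  # arbitrary logits, just for the check
demo_proba = softmax(demo_logits)
print(demo_proba)
print(demo_proba.sum(axis=1))  # every row should sum to 1
###Output _____no_output_____ ###Markdown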
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
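Since computing predictions always repeats the same three steps (add the bias term, compute the softmax scores, take the argmax), it can be convenient to wrap them in a small helper. This is just an optional convenience sketch, not part of the original solution; the cells below keep the explicit version: ###Code
def predict_classes(X_raw, Theta):
    # predict class indices from raw features (without the bias column) using the trained Theta
    X_b = np.c_[np.ones([len(X_raw), 1]), X_raw]  # add the bias term x0 = 1
    return np.argmax(softmax(X_b.dot(Theta)), axis=1)

# X_valid already contains the bias column, so we pass only the raw features:
np.mean(predict_classes(X_valid[:, 1:], Theta) == y_valid)
###Output _____no_output_____ ###Markdown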
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
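You can also inspect all of an estimator's hyperparameters, and the defaults your installed version will use, with `get_params()` — a quick aside that is not in the original notebook: ###Code
from sklearn.linear_model import LogisticRegression

LogisticRegression().get_params()  # shows, among others, the default solver for your Scikit-Learn version
###Output _____no_output_____ ###Markdown Now let's fit the model on the petal width feature: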
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. 
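One practical aside (not required for this exercise): `np.exp` overflows for very large logits. Because softmax is unchanged when the same constant is subtracted from every logit in a row, a numerically safer variant subtracts the per-row maximum first: ###Code
def softmax_stable(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)  # same result, but no overflow in np.exp
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown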
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
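A common refinement of the loop above (an addition, not how the original cell does it) is to keep a snapshot of the best parameters seen so far and fall back to it when the validation loss rises, instead of keeping the parameters from the final, slightly worse iteration. A compact sketch reusing the variables already defined: ###Code
Theta_curr = np.random.randn(n_inputs, n_outputs)
best_val_loss, Theta_best = np.infty, Theta_curr.copy()

for iteration in range(n_iterations):
    # one regularized gradient step on the training set (same update as above)
    Y_proba = softmax(X_train.dot(Theta_curr))
    gradients = (1/m * X_train.T.dot(Y_proba - Y_train_one_hot)
                 + np.r_[np.zeros([1, n_outputs]), alpha * Theta_curr[1:]])
    Theta_curr = Theta_curr - eta * gradients

    # regularized loss on the validation set
    Y_proba_valid = softmax(X_valid.dot(Theta_curr))
    val_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
                + alpha * 0.5 * np.sum(np.square(Theta_curr[1:])))

    if val_loss < best_val_loss:
        best_val_loss, Theta_best = val_loss, Theta_curr.copy()  # snapshot the best model so far
    else:
        break  # validation loss went up: stop and keep Theta_best
###Output _____no_output_____ ###Markdown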
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
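# figures saved by save_fig() below end up under PROJECT_ROOT_DIR/images/CHAPTER_ID/<fig_id>.png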
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: 
# not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression 
= Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 
1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = 
theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", 
fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. 
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercices in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import 
numpy.random as rnd import os # to make this notebook's output stable across runs rnd.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) rnd.seed(42) theta = rnd.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() rnd.seed(42) theta = rnd.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown 
Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) rnd.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 rnd.seed(42) theta = rnd.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = rnd.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd rnd.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import 
StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline(( ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), )) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("sgd_reg", LinearRegression()), )) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge rnd.seed(42) m = 20 X = 3 * rnd.rand(m, 1) y = 1 + 0.5 * X + rnd.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), )) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1)) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, 
solver="sag") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline(( ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), )) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in 
range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression() log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') 
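# the blue arrow above points left, toward the 'Not Iris-Virginica' side of the boundary; the green arrow below points right, toward 'Iris-Virginica'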
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. 
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
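The pseudoinverse itself is obtained from the Singular Value Decomposition of $\mathbf{X}$: if $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$, then $\mathbf{X}^{+} = \mathbf{V} \mathbf{\Sigma}^{+} \mathbf{U}^T$, where $\mathbf{\Sigma}^{+}$ is built by inverting every singular value above a small threshold and zeroing out the rest. A minimal sketch, reusing the `X_b` and `y` arrays defined above: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.)          # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)    # Moore-Penrose pseudoinverse of X_b
X_b_pinv.dot(y)                                 # should match theta_best
###Output _____no_output_____ ###Markdown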
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
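# overlay the mini-batch and full-batch paths next; batch GD follows the smoothest, most direct route toward the minimum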
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
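# note the gap between the training and validation curves: the degree-10 model fits the training data much better than the validation data, which is a sign of overfitting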
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
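# path_N (yellow dashed, just plotted) follows the gradient of the penalty term alone, since core=0 switches off the MSE part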
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] # 1's ensure bias is not multiplied ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 # 20% data to test set validation_ratio = 0.2 # 20% training data to validation set total_size = len(X_with_bias) # total size of training set # Getting sizes of train, validation, and test sets test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size # Compute set of random indices from size of total dataset rnd_indices = np.random.permutation(total_size) # Split data set along randomly generated indices X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. 
Each instance will have target class probabilities equal to 0.0 for all classes except for the target class, which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 # number of classes = highest class index + 1 (indices start at 0) m = len(y) # number of instances to encode Y_one_hot = np.zeros((m, n_classes)) # create a matrix of zeros of size m x n_classes Y_one_hot[np.arange(m), y] = 1 # set entry (i, y[i]) to 1 for each instance i return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log (i.e., compute $\log\left(\hat{p}_k^{(i)} + \epsilon\right)$) to avoid getting `nan` values.
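A related numerical caveat: `np.exp(logits)` itself can overflow when the scores grow large. That is not an issue for this small Iris example, but a mathematically equivalent safeguard is to subtract each row's maximum score before exponentiating, since shifting all scores by a constant leaves the Softmax output unchanged. A minimal sketch (the helper name `softmax_stable` is purely illustrative; the training code below keeps the simpler `softmax` defined above): ###Code
def softmax_stable(logits):
    # subtracting the row-wise max leaves the result unchanged but keeps np.exp() in a safe range
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown Now for the training loop itself: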
###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) # randomly initialize parameters of model for iteration in range(n_iterations): logits = X_train.dot(Theta) # computing log odds Y_proba = softmax(logits) # computing score of class loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) # calculating loss error = Y_proba - Y_train_one_hot # calculating error term if iteration % 500 == 0: # log iteration and loss print(iteration, loss) gradients = 1/m * X_train.T.dot(error) # calculating gradient of loss function Theta = Theta - eta * gradients # updating parameters of model with learning rate * step ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) # trialing on validation set Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) # cross entropy loss term l2_loss = 1/2 * np.sum(np.square(Theta[1:])) # l_2 loss term loss = xentropy_loss + alpha * l2_loss # total loss term error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) # gradient includes derivative of l_2 regularization term gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. 
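In practice the validation loss can fluctuate from one iteration to the next, so stopping at the very first increase can be premature. A common refinement is a small "patience" counter that stops only after the loss has failed to improve for several consecutive iterations. A minimal sketch of that bookkeeping (the `patience` value and the `should_stop` helper are illustrative, not part of the exercise): ###Code
patience = 10  # illustrative: how many non-improving iterations to tolerate

def should_stop(loss, best_loss, bad_steps):
    # returns (updated best loss, updated counter, whether to stop now)
    if loss < best_loss:
        return loss, 0, False
    return best_loss, bad_steps + 1, bad_steps + 1 >= patience
###Output _____no_output_____ ###Markdown The cell below keeps the simpler stop-at-the-first-increase rule: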
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: # conditional for stopping when loss begins to increase best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) # plt.legend() plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.01 # learning rate n_iterations = 100000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) # batch is the number of data samples to work through before updating model parameters # 1. Batch Gradient Descent. Batch Size = Size of Training Set (update happens by all) # 2. Stochastic Gradient Descent. Batch Size = 1 (each sample gets to update the model) # 3. Mini-Batch Gradient Descent. 
1 < Batch Size < Size of Training Set n_epochs = 50 # it means that we will train the entire dataset 50 times # each epoch comprises of one or more batches # number of epochs are usually high in SGD # in Batch GD, epochs are equal to the number of training samples t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): # iterate over each batch if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, 
"r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) 
plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", 
linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output 
_____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris 
setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log (i.e., compute $\log\left(\hat{p}_k^{(i)} + \epsilon\right)$) to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
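Written out, the quantities computed in the next cell are the penalized cost $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \alpha \, \dfrac{1}{2}\sum\limits_{j \geq 1}\sum\limits_{k}\theta_{j,k}^2$, where the sum over $j \geq 1$ skips the bias row of $\mathbf{\Theta}$, and the gradients $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}} + \alpha \, \mathbf{\theta}^{(k)}$ with the bias component of the penalty term set to zero, which is what the `np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]` expression below implements.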
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
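One practical aside before looking at the parameters: the `softmax()` helper above exponentiates the raw logits directly, which can overflow for very large scores. The logits stay small in this exercise, so it is not needed here, but a common equivalent variant subtracts the per-row maximum first: ###Code
def softmax_stable(logits):
    # Subtracting the row-wise maximum leaves the result unchanged,
    # since exp(s - c) / sum(exp(s - c)) == exp(s) / sum(exp(s)),
    # but it keeps np.exp() from overflowing on large logits
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown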
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
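One more detail worth knowing: the loop above breaks as soon as the validation loss rises, so `Theta` ends up one gradient step past its best value. That is harmless here, but if you want the best parameters themselves, a small variation (just a sketch, not used for the plots below) keeps a snapshot and rolls back before stopping: ###Code
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1  # regularization hyperparameter

best_loss = np.infty
best_Theta = None
Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # same gradient step as above
    error = softmax(X_train.dot(Theta)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # same validation loss as above
    Y_proba_valid = softmax(X_valid.dot(Theta))
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta[1:]))

    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta.copy()  # snapshot of the best parameters so far
    else:
        Theta = best_Theta         # roll back to the best snapshot, then stop
        break
###Output _____no_output_____ ###Markdown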
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._**My Notes are mixed in with this notebook. So are parts of the textbook** Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" # def save_fig(fig_id, tight_layout=True): # path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") # print("Saving figure", fig_id) # if tight_layout: # plt.tight_layout() # plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal EquationA linear model makes a prediction by computing the weighted sum of the input features plus a constant _bias_ term (aka intercept):$$ \widehat {y}=\theta _{0}+\theta _{1}x_{2}+\ldots +\theta _{n}x_{n} $$Can also be written in a more consise form:$$ \widehat {y}=h_{\theta }\left( x\right) =\theta \cdot x $$- $\theta$ is the model’s parameter vector, containing the bias term $\theta_0$ and the feature weights $\theta_1$ to $\theta_n$- $x$ is the instance's _feature vector_ with length $n$ and $x_0$ is always equal to 1 for some reason.- $\theta \cdot x$ is the dot product- $h_{\theta }$ is the _hypothesis_ function using the model parameters $\theta$ ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) # save_fig("generated_data_plot") plt.show() ###Output _____no_output_____ ###Markdown So how do we train this linear model? We need a way to measure how good/poor the model we have does. We have that measure, it's RSME. So, we need to find the value of $\theta$ that _minimizes_ RMSE. In practice it's easier to minimize MSE and it's the same result since the value that minimizes a function also minimizes its root that's just math.The MSE of a linear regression hypotheses is calculated with:$$ MSE\left( x,h_{\theta }\right) =\dfrac {1}{m}\sum ^{m}_{i=1}\left( \theta ^{T}x^{\left( i\right) }-y^{\left( i\right) }\right) ^{2} $$This is also the **cost function** for our linear regression modelTo simplify you can also just write $MSE(\theta)$ The Normal EquationNote: the first releases of the book implied that the LinearRegression class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition$$ \widehat \theta =\left( X^{T}X\right) ^{-1}X^{T}y $$- $\widehat \theta$ is the value of $\theta$ that minimizes the cost function- $y$ is the vector of target values containing $y^{(1)}$ to $y^{(m)}$Time to generate some data: ___jumping in here Why do you add 1 in front?Let's try it without the ones column: ###Code X.shape theta_best = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = X_new y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown See how the intercept is at zero? Back to the normal textbook___Now with the intercept like in the bookNow let’s compute $\hat \theta$ using the Normal Equation. 
We will use the `inv()` function from NumPy’s Linear Algebra module (`np.linalg`) to compute the inverse of a matrix, and the `dot()` method for matrix multiplication: ###Code X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) ###Output _____no_output_____ ###Markdown The actual function that we used to generate the data is y = 4 + 3x1 + Gaussian noise. Let's check out what the model got: ###Code theta_best ###Output _____no_output_____ ###Markdown yo pretty good. If it were perfect it would have been 4 and 3. Making PredictionsWe can make predictions with our new model. ###Code X_new = np.array([[0], [2]]) print("X_new: {}".format(X_new)) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict ###Output X_new: [[0] [2]] ###Markdown We can predict and plot the prediction: ###Code plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) # save_fig("linear_model_predictions") plt.show() ###Output _____no_output_____ ###Markdown Recap. We just did a linear regression using the linear algebra form of the equation. Now let's do another model but this time use sklearn. ###Code from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
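To make that note a bit more concrete, here is a small throwaway check (purely illustrative, reusing the `X_b` and `y` defined above) that rebuilds the pseudoinverse from the SVD by hand and compares it with `np.linalg.pinv()`: ###Code
# Thin SVD: X_b = U * diag(s) * Vt, so the pseudoinverse is V * diag(1/s) * U.T,
# with singular values below a small tolerance treated as zero
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.0)
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)

print(np.allclose(X_b_pinv, np.linalg.pinv(X_b)))  # should print True
X_b_pinv.dot(y)  # same theta as np.linalg.pinv(X_b).dot(y) above
###Output _____no_output_____ ###Markdown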
Linear regression using batch gradient descent To do gradient descent, you need to calculate the rate of change of the cost function in a certain direction. Time for Calculus. we could use the partial derivative of the cost function with respect to the parameter $\theta_j$, denoted $ \frac{\delta}{\delta \theta_j} \text{ MSE} \left( \theta \right)$. Or, instead of computing the partial derivative for each $j$, we could use the gradient vector $\nabla_{\theta} \text{ MSE} \left( \theta \right) $. That's what this code below does. ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients ###Output _____no_output_____ ###Markdown Note that it does math on all of the dataset at each step. Yikes. ###Code theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) # save_fig("gradient_descent_plot") plt.show() ###Output _____no_output_____ ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown # save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients 
= 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) # save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown Polynomial regressionThis is the next step. Our data isn't just a straight line anymore. And of course we have to mention that this is still _linear_ regression, even though the data is not linear. ###Code import numpy as np import numpy.random as rnd np.random.seed(42) # Make some data with some noise m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) # save_fig("quadratic_data_plot") plt.show() ###Output _____no_output_____ ###Markdown This code below transforms our original data. It's more than just the square of the data it's (1, a, b, $a^2$, ab $b^2$). That's because we'll give it degree=2. Note that the size of the output grows really really fast with an increase in degree. ###Code from sklearn.preprocessing import PolynomialFeatures PolynomialFeatures? poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) # save_fig("quadratic_predictions_plot") plt.show() ###Output _____no_output_____ ###Markdown Learning CurvesA high degree model will fit the data really really well. It'll wiggle back and forth like crazy and you can even det it to hit every data point. Of course that wouldn't be very useful when it comes time to make predictions on new data. 
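Before plotting that, here is a quick check of the earlier point about how fast the number of generated features grows with the degree. The 3-feature input is made up purely for illustration (the dataset above has a single feature, where the growth is only linear): ###Code
from sklearn.preprocessing import PolynomialFeatures
import numpy as np

X_demo = np.random.rand(5, 3)  # 5 instances, 3 features (illustrative only)
for d in (2, 3, 5, 10):
    n_out = PolynomialFeatures(degree=d, include_bias=False).fit_transform(X_demo).shape[1]
    print(d, n_out)
# With include_bias=False this comes out to (n + d)! / (n! d!) - 1 features
# for n original features and degree d, so it blows up quickly
###Output _____no_output_____ ###Markdown With that in mind, here is how differently a degree-1, degree-2 and degree-300 model fit the same 100 points: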
###Code from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) # save_fig("high_degree_polynomials_plot") plt.show() ###Output _____no_output_____ ###Markdown So you can have too high a degree, and too low a degree. So how do you know if you're over or under fitting?We talked about cross validation before.1. If your model performs well on training but poor on the testing data, then your model is overfitting.2. If it performs poorly on both, then it is under fitting.3. The last combination possible is poor on training but well on testing. Which seems dumb. The book didn't metion this third one.Another way to tell is with _learning curves_. These are plots of perfomance on training and testing sets as a function of the training size (or training iteration) ((what)). To generate learning curve plots train the model several times on different sized subset of the training data. ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book # save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown # save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), 
("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) # save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) # save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) # save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off 
y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) # save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 
3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) # save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) # save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 
3.5]) # save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! 
Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446183864821945 500 0.8351003035768683 1000 0.6876961554414912 1500 0.6010299835452122 2000 0.5442782811959167 2500 0.5037262742244605 3000 0.4728357293908468 3500 0.4481872508179334 4000 0.4278347262806174 4500 0.4105891022823527 5000 0.39568032574889406 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
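Following the advice above about writing down and checking shapes, a throwaway cell like this one (purely diagnostic, with a fresh random `Theta`) makes it easy to confirm that every term lines up before running the full regularized loop: ###Code
Theta_check = np.random.randn(n_inputs, n_outputs)        # (n_inputs, n_outputs)
logits_check = X_train.dot(Theta_check)                   # (m, n_outputs)
proba_check = softmax(logits_check)                       # (m, n_outputs)
error_check = proba_check - Y_train_one_hot               # (m, n_outputs)
grad_check = 1/len(X_train) * X_train.T.dot(error_check)  # (n_inputs, n_outputs), same shape as Theta

for name, arr in (("Theta", Theta_check), ("logits", logits_check), ("proba", proba_check),
                  ("error", error_check), ("gradients", grad_check)):
    print(name, arr.shape)
###Output _____no_output_____ ###Markdown Now the regularized training run itself: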
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629574947908294 500 0.5341631554372782 1000 0.5037712748637474 1500 0.4948056455575166 2000 0.49140819484111964 2500 0.4900085074445459 3000 0.48940742896132616 3500 0.4891431024691195 4000 0.48902516549065855 4500 0.48897205809605315 5000 0.4889480004791563 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
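As a rough cross-check of the hand-rolled model (illustrative only, not part of the exercise), we can fit Scikit-Learn's own softmax regression on the same split. The manual bias column is dropped because `LogisticRegression` fits its own intercept, and the regularization setups differ (`C` here versus `alpha` above), so the scores are only expected to be close, not identical: ###Code
from sklearn.linear_model import LogisticRegression

softmax_check = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                   C=10, random_state=42)
softmax_check.fit(X_train[:, 1:], y_train)     # drop the manual bias column
softmax_check.score(X_valid[:, 1:], y_valid)   # validation accuracy
###Output _____no_output_____ ###Markdown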
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
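As a small illustration (this sketch is not from the book), the same pseudoinverse can be computed by hand from the SVD of $\mathbf{X}$, inverting only the singular values above a small threshold, which is essentially what `np.linalg.pinv()` does:
###Code
# Minimal sketch (assumes X_b and y from the cells above): pseudoinverse via the SVD
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]           # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)  # essentially what np.linalg.pinv(X_b) computes
X_b_pinv.dot(y)                               # same theta as above
###Output
_____no_output_____
###Markdown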
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
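# Batch GD heads fairly straight for the minimum, while the Stochastic and Mini-batch paths bounce around it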
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
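# Validation RMSE (blue) reaches its minimum at the best epoch, then rises again as the model starts to overfit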
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
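One caveat worth noting (this aside is not part of the original solution): the loop above stops as soon as the validation loss stops improving and keeps the current `Theta`, rather than rolling back to the parameters that achieved the lowest validation loss. A minimal sketch of that variant, using separate names such as `Theta_alt` so the `Theta` used below is left untouched, could look like this:
###Code
# Sketch only (reuses eta, alpha, epsilon, n_iterations and the data splits defined above):
# keep a copy of the parameters with the lowest validation loss and restore them at the end.
best_loss_alt = np.infty
Theta_alt = np.random.randn(n_inputs, n_outputs)
best_Theta_alt = Theta_alt.copy()

for iteration in range(n_iterations):
    error = softmax(X_train.dot(Theta_alt)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_alt[1:]]
    Theta_alt = Theta_alt - eta * gradients
    Y_proba_valid = softmax(X_valid.dot(Theta_alt))
    valid_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
                  + alpha * 1/2 * np.sum(np.square(Theta_alt[1:])))
    if valid_loss < best_loss_alt:
        best_loss_alt = valid_loss
        best_Theta_alt = Theta_alt.copy()  # remember the best parameters seen so far

Theta_alt = best_Theta_alt  # roll back to the best model
###Output
_____no_output_____
###Markdown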
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
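# Compare the three paths in parameter space: Batch GD converges smoothly, while SGD and Mini-batch GD wander around the minimum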
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
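# The clear gap between the training and validation RMSE curves for this degree-10 model is a telltale sign of overfitting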
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
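Concretely, the regularized cost that the next training loop minimizes is $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{j \geq 1}\sum\limits_{k=1}^{K}\left(\theta_{j,k}\right)^2$, and each class's gradient vector simply gains the extra term $\alpha \, \mathbf{\theta}^{(k)}$, with the bias row ($j = 0$) left out of both the penalty and the extra gradient term.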
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown Understanding np.c_ ###Code X[:5] X_b[:5] np.ones((5,1)) np.c_[np.ones((5, 1)), X[:5]] ###Output _____no_output_____ ###Markdown Continuation ###Code X_new = np.array([[0], [2]]) # two new instances X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict X_new X_new_b plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
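As a short aside (a sketch, not in the book): the pseudoinverse itself is obtained from the Singular Value Decomposition $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$. The algorithm zeroes out the singular values below a tiny threshold, inverts the remaining ones to get $\mathbf{\Sigma}^+$, and computes $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$. We can rebuild it by hand and check that it gives the same parameters as `lstsq()`: ###Code
# added sketch: rebuild the pseudoinverse from the SVD and compare with the lstsq() solution
U, sigma, Vt = np.linalg.svd(X_b, full_matrices=False)
sigma_plus = np.zeros_like(sigma)
sigma_plus[sigma > 1e-10] = 1 / sigma[sigma > 1e-10]   # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(sigma_plus)).dot(U.T)      # X+ = V Sigma+ U^T
np.allclose(X_b_pinv.dot(y), theta_best_svd)           # same parameters as np.linalg.lstsq()
###Output
_____no_output_____
###Markdown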
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 # there are 100 instances theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): # as t increases, the learning schedule makes the learning rate decay, starting from 0.1 return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) # get the current learning rate from the schedule (it is called a schedule, not a rate, because it changes over time) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) y.ravel().shape y.shape sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): # shuffled indices let us shuffle the examples and the labels in the same way shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients
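# keep a copy of theta after every mini-batch update so the three optimization paths can be compared below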
theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) # entero! 
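# the training error is computed on the m instances seen so far, while the validation error is always computed on the full validation set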
train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output _____no_output_____ ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) # warm start para que siga donde paró minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: # si este es el error de validación más bajo hasta ahora, guardas todo y haces deepcopy del modelo minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, 
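# evaluate the cost and the penalty terms on a dense grid of (theta1, theta2) values for the Lasso vs Ridge figure below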
t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output Saving figure logistic_regression_contour_plot ###Markdown Softmax Regression ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) # softmax: multi_class = "multinomial" softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = 
ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
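One detail worth noting: the loop above stops at the first iteration where the validation loss stops improving, leaving `Theta` at that last, slightly worse update. A small variation (an added sketch, not in the book) keeps a copy of the best parameters seen so far and rolls back to them before stopping: ###Code
# added sketch: early stopping that rolls back to the best parameters
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1            # regularization hyperparameter
best_loss = np.infty
best_Theta = None

Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    logits = X_train.dot(Theta)
    Y_proba = softmax(logits)
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients
    logits = X_valid.dot(Theta)
    Y_proba = softmax(logits)
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta[1:]))
    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta.copy()   # remember the best parameters seen so far
    else:
        Theta = best_Theta          # roll back to the best parameters, then stop
        break
###Output
_____no_output_____
###Markdown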
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary. ###Code ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
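# each curve traces how (theta0, theta1) evolves during training: Stochastic GD jumps around the most, Mini-batch is smoother, and Batch GD heads almost straight for the minimum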
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
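# note the persistent gap between the training curve and the validation curve: the degree-10 model fits the training data much better than it generalizes, which is the signature of overfitting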
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
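# the white path follows batch gradient descent on the regularized cost, while the dashed yellow path follows gradient descent on the penalty term alone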
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
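As a quick sanity check (a minimal sketch, not part of the original exercise: the petal measurements below are made-up values, while `softmax`, `Theta` and the bias convention are exactly the ones defined above), we can score a single flower by hand: ###Code x_example = np.array([[1., 5.0, 2.0]]) # bias term, then hypothetical petal length and petal width softmax(x_example.dot(Theta)) # one row of class probabilities, which sums to 1 ###Output _____no_output_____ ###Markdown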
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
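One small caveat (not handled by the loop above): when the loop breaks, `Theta` has already taken the gradient step that made the validation loss go up, so the parameters from the best iteration have been overwritten. A minimal sketch of the extra bookkeeping (with a hypothetical `best_Theta` variable, otherwise the same names as above) would initialise it before the loop and snapshot the parameters whenever the validation loss improves: ###Code best_loss, best_Theta = np.infty, None # then, inside the loop: if loss < best_loss: best_loss, best_Theta = loss, Theta.copy() ###Output _____no_output_____ ###Markdown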
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
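Written out, the penalized cost that the next cell minimizes is the cross entropy plus the term implemented as `alpha * l2_loss` below, where $\alpha$ is the regularization hyperparameter, $n$ is the number of input features, and row $j=0$ of $\mathbf{\Theta}$ (the bias terms) is excluded from the penalty:$J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{K}{\left(\theta_{j,k}\right)^2}$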
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
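As an optional cross-check (a sketch that is not part of the exercise: it simply reuses Scikit-Learn's softmax classifier from earlier with the same `C=10`, dropping the bias column we added manually since `LogisticRegression` fits its own intercept), we can confirm that our manual model is in the same ballpark on the validation set: ###Code softmax_clf = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42) softmax_clf.fit(X_train[:, 1:], y_train) np.mean(softmax_clf.predict(X_valid[:, 1:]) == y_valid) # validation accuracy of the Scikit-Learn model ###Output _____no_output_____ ###Markdown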
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function.
Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
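As noted earlier, it is easy to get the gradient formula subtly wrong, so here is an optional sanity check (a sketch, not part of the original notebook): compare the analytical cross-entropy gradient with a centered finite-difference estimate for a single entry of `Theta` — the two numbers should agree closely. ###Code
# Hypothetical finite-difference check of the (unregularized) cross-entropy gradient.
def xentropy_of(Theta_check):
    p = softmax(X_train.dot(Theta_check))
    return -np.mean(np.sum(Y_train_one_hot * np.log(p + epsilon), axis=1))

i, k = 1, 2                                   # arbitrary parameter entry to probe
h = 1e-6
Theta_plus, Theta_minus = Theta.copy(), Theta.copy()
Theta_plus[i, k] += h
Theta_minus[i, k] -= h
numeric_grad = (xentropy_of(Theta_plus) - xentropy_of(Theta_minus)) / (2 * h)
analytic_grad = (1/m * X_train.T.dot(softmax(X_train.dot(Theta)) - Y_train_one_hot))[i, k]
numeric_grad, analytic_grad
###Output _____no_output_____ ###Markdown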
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
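Concretely, the pseudoinverse is obtained from the Singular Value Decomposition $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$ as $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ inverts every singular value above a small tolerance and zeroes out the rest. The following rough sketch (not part of the book's code) reproduces this by hand and should give a result very close to `theta_best`: ###Code
# Sketch: build the Moore-Penrose pseudoinverse of X_b from its SVD.
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.0)        # invert singular values above a small tolerance
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)                                # ~ theta_best, and ~ np.linalg.pinv(X_b).dot(y)
###Output _____no_output_____ ###Markdown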
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
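As a quick aside (not part of the original notebook), row $y^{(i)}$ of an identity matrix is exactly the one-hot vector for class $y^{(i)}$, so the whole matrix can be built in one line; below we write the conversion out explicitly instead, which is easier to read. ###Code
# Aside: one-line one-hot encoding by indexing into an identity matrix.
np.eye(y_train.max() + 1)[y_train[:10]]        # same result as to_one_hot(y_train[:10]) defined below
###Output _____no_output_____ ###Markdown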
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
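A side note on the `softmax()` implementation above (an aside, not from the book): exponentiating raw logits can overflow once the scores become large. Because the softmax is unchanged when the same constant is subtracted from all the logits of an instance, a numerically safer sketch subtracts the row-wise maximum first: ###Code
# Hypothetical numerically stable softmax variant (not used elsewhere in this notebook).
def softmax_stable(logits):
    shifted = logits - np.max(logits, axis=1, keepdims=True)   # softmax is shift-invariant per row
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)

np.allclose(softmax_stable(X_train.dot(Theta)), softmax(X_train.dot(Theta)))   # should be True
###Output _____no_output_____ ###Markdown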
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
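As an optional cross-check (a sketch, not part of the original notebook), you can compare this manual implementation with Scikit-Learn's own softmax regression on the same split; the validation accuracy should be in the same ballpark. The hyperparameters below (`C=10`, `solver="lbfgs"`) simply mirror the ones used earlier in the chapter. ###Code
# Sketch: Scikit-Learn's softmax regression on the same training split (drop our manual bias
# column, since LogisticRegression adds its own intercept term).
from sklearn.linear_model import LogisticRegression

sk_softmax = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)
sk_softmax.fit(X_train[:, 1:], y_train)
sk_softmax.score(X_valid[:, 1:], y_valid)
###Output _____no_output_____ ###Markdown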
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary. ###Code ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
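To see this in practice, here is a small sketch (not from the book): duplicate the input feature so that the columns of the design matrix become linearly dependent. $\mathbf{X}^T \mathbf{X}$ is then singular, so the Normal Equation either raises an error or returns numerically meaningless values, while the SVD-based pseudoinverse still produces a sensible solution. ###Code
# Sketch: linearly dependent features break the Normal Equation but not the pseudoinverse.
X_dup_b = np.c_[np.ones((len(X), 1)), X, X]    # bias column plus the same feature twice
try:
    theta_normal_eq = np.linalg.inv(X_dup_b.T.dot(X_dup_b)).dot(X_dup_b.T).dot(y)
    print("Normal Equation returned (not meaningful):", theta_normal_eq.ravel())
except np.linalg.LinAlgError:
    print("Normal Equation failed: X.T.dot(X) is singular")

np.linalg.pinv(X_dup_b).dot(y)                 # the fitted slope is split evenly across the two identical columns
###Output _____no_output_____ ###Markdown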
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) theta_0s = theta[0].tolist() theta_1s = theta[1].tolist() for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta_0s.append(theta[0][0]) theta_1s.append(theta[1][0]) theta plt.plot(theta_0s, theta_1s) plt.axis([-2, 5, -2, 5]) X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() theta_path_bgd ###Output _____no_output_____ ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], 
theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") 
# not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y.ravel()) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, 
best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() len(X_train) len(X_val) from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 
0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) log_reg.coef_, log_reg.intercept_ X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.intercept_ / -log_reg.coef_[0] log_reg.predict([[1.7], [1.5]]) np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) x0 len(x0.ravel()) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 7, 500).reshape(-1, 1), np.linspace(0, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([0, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([0, 7, 0, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 
 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446183864821945 500 0.8351003035768683 1000 0.6876961554414912 1500 0.6010299835452122 2000 0.5442782811959167 2500 0.5037262742244605 3000 0.4728357293908468 3500 0.4481872508179334 4000 0.4278347262806174 4500 0.4105891022823527 5000 0.39568032574889406 ###Markdown And that's it! The Softmax model is trained. 
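One aside before moving on (not an issue in this exercise, since the iris features keep the logits small): the `softmax()` defined above exponentiates the raw logits, which can overflow for very large scores. A common numerically stable variant subtracts the row-wise maximum first, which leaves the output mathematically unchanged: ###Code
def stable_softmax(logits):
    # Subtracting the per-row max does not change the softmax result,
    # but keeps np.exp() from overflowing when the logits are large.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown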
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629574947908294 500 0.5341631554372782 1000 0.5037712748637474 1500 0.4948056455575166 2000 0.49140819484111964 2500 0.4900085074445459 3000 0.48940742896132616 3500 0.4891431024691195 4000 0.48902516549065855 4500 0.48897205809605315 5000 0.4889480004791563 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
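As an optional cross-check (not part of the book's solution), you could compare this manual implementation against Scikit-Learn's own softmax regression on the same split, dropping the manually added bias column since `LogisticRegression` fits its own intercept: ###Code
# Illustrative cross-check only: same hyperparameters as used elsewhere in this notebook.
from sklearn.linear_model import LogisticRegression
sk_softmax = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)
sk_softmax.fit(X_train[:, 1:], y_train)                  # drop the x0 = 1 bias column
np.mean(sk_softmax.predict(X_valid[:, 1:]) == y_valid)   # accuracy on the same validation set
###Output _____no_output_____ ###Markdown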
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
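Concretely, if the SVD of $\mathbf{X}$ is $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$, then the pseudoinverse is $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ is obtained by taking $\mathbf{\Sigma}$, inverting every singular value above a tiny threshold, setting the rest to zero, and transposing the resulting matrix.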
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) ###Output 
_____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, 
t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() plt.figure(figsize=(9, 3)) for w, b, style in zip([1,2,1,-2], [0,0,1, 0], ["b-", "g--", "r:", "y-"]): x = np.linspace(-10, 10, 200) t = w * x + b sig = 1 / (1 + np.exp(-t)) plt.plot(x, sig, style, linewidth=2, label=f"$t={w}*x+{b}$") plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.xlabel("t") plt.legend(loc="upper left", fontsize=12) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) iris['data'].shape X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) x0.shape x1.shape X_new.shape from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), # (500,) -> (500, 1) np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = 
 log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) y_proba.shape contour.levels x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 1.7]]) softmax_reg.predict_proba([[5, 1.7]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually.
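Purely for reference, the Scikit-Learn shortcut being set aside here would look roughly like the sketch below (the `*_alt` names are hypothetical, chosen so that nothing defined later is overwritten): ###Code
# Illustrative only: a three-way split with train_test_split (imported earlier in this notebook),
# giving roughly 60% train / 20% validation / 20% test like the manual split that follows.
X_train_alt, X_test_alt, y_train_alt, y_test_alt = train_test_split(X_with_bias, y, test_size=0.2, random_state=2042)
X_train_alt, X_valid_alt, y_train_alt, y_valid_alt = train_test_split(X_train_alt, y_train_alt, test_size=0.25, random_state=2042)
###Output _____no_output_____ ###Markdown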
 So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out.
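For example, here is a quick, illustrative shape check of the terms that will appear in the training loop below (the zero matrix is only a stand-in for the `Theta` we are about to learn): ###Code
Theta_stand_in = np.zeros((n_inputs, n_outputs))            # same shape as the parameter matrix
logits_check = X_train.dot(Theta_stand_in)                  # (m, 3): one score per instance and class
assert logits_check.shape == Y_train_one_hot.shape          # the logits line up with the one-hot targets
gradients_check = 1/len(X_train) * X_train.T.dot(logits_check - Y_train_one_hot)
assert gradients_check.shape == Theta_stand_in.shape        # the gradient matrix has the same shape as Theta
###Output _____no_output_____ ###Markdown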
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
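Under the hood, the pseudoinverse is obtained from the Singular Value Decomposition of $\mathbf{X}$: singular values below a small threshold are treated as zero, the remaining ones are inverted, and the factors are recombined ($\mathbf{X}^+ = \mathbf{V}\mathbf{\Sigma}^+\mathbf{U}^T$). Here is a rough sketch of that computation (the threshold used below is only illustrative, not NumPy's exact default): ###Code
# Sketch: build the pseudoinverse of X_b manually from its SVD
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10 * s.max(), 1 / s, 0.)  # invert only the sufficiently large singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)  # should be very close to theta_best
###Output _____no_output_____ ###Markdown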
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
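Before the SGD version below, it may also help to see the closed-form Ridge solution from the chapter spelled out: $\hat{\boldsymbol{\theta}} = \left(\mathbf{X}^T\mathbf{X} + \alpha\mathbf{A}\right)^{-1}\mathbf{X}^T\mathbf{y}$, where $\mathbf{A}$ is the identity matrix with a 0 in the top-left cell so that the bias term is not regularized. A quick sketch on the small dataset defined above (the result should be close to, though not necessarily identical to, Scikit-Learn's): ###Code
# Sketch: closed-form Ridge solution on the small (m == 20) dataset defined above
alpha = 1
X_b_small = np.c_[np.ones((m, 1)), X]  # add the bias column
A = np.identity(X_b_small.shape[1])
A[0, 0] = 0  # do not regularize the bias term
theta_ridge = np.linalg.inv(X_b_small.T.dot(X_b_small) + alpha * A).dot(X_b_small.T).dot(y)
theta_ridge
###Output _____no_output_____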
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Own solutions to exercises Exercise 1Stochastic or mini-batch GD. Exercise 2GD algorithms don't like very different scales, so the features should be scaled by e.g. StandardScaler. Exercise 3No, because the space is "bowl shaped" (i.e. only one minimum). Exercise 4No, e.g. the stochastic or mini-batch GD algorithms might jump around the minimum a bit, without ever finding it. Exercise 5The learning rate may be too high, especially if the training error also goes up. If it does not go up, there is overfitting and we should stop. Exercise 6No, some more epochs should be calculated to see if the validation error remains above the minimum. Exercise 7Stochastic GD gets near the minimum the fastest, but only Batch GD finds the actual minimum. The others can be made to converge by selecting a sufficiently small learning rate. Exercise 8Overfitting occurs. The polynomial degree could be reduced, model could be regularized, or the size of the training set could be increased. Exercise 9Underfitting occurs, the bias should be reduced. Alpha should be decreased. Exercise 10Regularization should be used to reduce overfitting. Lasso is better if there are only a few useful features, since less important ones tend to be eliminated by it. Elastic Net is preferable over Lasso when the number of features is greater than the number of instances or when there are several strongly correlated features. Exercise 11Two Logistic Regression classifiers. 
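The reasoning: the two labels (e.g. outdoor/indoor and daytime/nighttime) are not mutually exclusive, so Softmax Regression, which picks exactly one class, does not apply; you train one independent binary classifier per label instead. A toy sketch with made-up placeholder data (`X_pics`, `y_outdoor` and `y_daytime` are purely hypothetical): ###Code
# Sketch: one independent Logistic Regression classifier per (non-exclusive) label
from sklearn.linear_model import LogisticRegression
X_pics = np.random.rand(100, 5)           # hypothetical picture features
y_outdoor = np.random.randint(0, 2, 100)  # hypothetical outdoor(1)/indoor(0) labels
y_daytime = np.random.randint(0, 2, 100)  # hypothetical daytime(1)/nighttime(0) labels
outdoor_clf = LogisticRegression(solver="lbfgs").fit(X_pics, y_outdoor)
daytime_clf = LogisticRegression(solver="lbfgs").fit(X_pics, y_daytime)
###Output _____no_output_____ ###Markdown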
Exercise 12 ###Code from sklearn import datasets iris = datasets.load_iris() X=iris["data"][:, (2, 3)] y=iris["target"] X_bias = np.c_[np.ones([len(X), 1]), X] def yToProba(y): return np.asarray([[1 if v==0 else 0, 1 if v==1 else 0, 1 if v==2 else 0] for v in y]) yP = yToProba(y) num = len(y) a=np.arange(num) np.random.shuffle(a) cut = num*8//10 X_train = X_bias[a[:cut]] X_validate = X_bias[a[cut:]] y_validate=y[a[cut:]] yP_train = yP[a[:cut]] yP_validate = yP[a[cut:]] X.shape, X_train.shape, X_validate.shape, yP.shape, yP_train.shape, yP_validate.shape def softmaxScore(theta, x): return np.dot(theta, x) def softmax(Theta, x): scores=[softmaxScore(theta, x) for theta in Theta] expSum = np.sum(np.exp(scores)) return [np.exp(score)/expSum for score in scores] def softmaxM(Theta, X): logits = X.dot(Theta) exps = np.exp(logits) expSum = np.sum(exps, axis = 1, keepdims = True) return exps / expSum #return np.asarray([softmax(Theta, x) for x in X]) def crossEntropy(sm, y): return -1*np.sum(y * np.log(sm))/sm.shape[0] def crossEntropyGradient(sm, X, y): m = len(X) error = sm - y return (1/m) * X.T.dot(error) #summa = np.zeros((sm.shape[1], X.shape[1])) #count = 0 #for (smI, xI, yI) in zip(sm, X, y): # count += 1 # summa += np.outer(smI - yI, xI) #return summa / count sm = softmaxM(Theta, X_train) crossEntropyGradient(sm, X_train, yP_train) error = sm - yP_train X_train.T.dot(error)/len(X_train) Theta = np.random.rand(3, 3) print(Theta) learningRate = 0.1 prevCE = 10.0 count = 0 while count<5000: count += 1 sm = softmaxM(Theta, X_train) ceg = crossEntropyGradient(sm, X_train, yP_train) Theta -= learningRate * ceg sm_validate = softmaxM(Theta, X_validate) ce = crossEntropy(sm_validate, yP_validate) if (count%100)==0: print(count, prevCE, ce) #if ce>prevCE: # print("Going UP:", count, ce, prevCE) # break prevCE = ce print(Theta, ceg) sm=softmaxM(Theta, X_train) -1*np.sum(yP_train * np.log(sm))/sm.shape[0] Y_proba = softmaxM(Theta, X_validate) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_validate) accuracy_score np.dot(Theta[0], X_train[0]) Theta, Theta.T softmaxM(Theta, X_train) softmaxM(Theta, X_train) - yP_train yP_train ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
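For reference, the Scikit-Learn version would look roughly like the sketch below (a 60/20/20 split matching the ratios used next; the `_alt` suffixes are just to avoid overwriting the variables the manual implementation defines): ###Code
# Sketch: the same three-way split done with train_test_split instead of by hand
from sklearn.model_selection import train_test_split
X_tmp, X_test_alt, y_tmp, y_test_alt = train_test_split(X_with_bias, y, test_size=0.2, random_state=2042)
X_train_alt, X_valid_alt, y_train_alt, y_valid_alt = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=2042)  # 0.25 of the remaining 80% = 20% overall
###Output _____no_output_____ ###Markdown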
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out.
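For example, a quick shape check before training (a small addition, not part of the original solution) makes it obvious that the gradient matrix must end up with the same shape as `Theta`: ###Code
# Sketch: verify the shape of every term in the gradient computation
Theta_check = np.random.randn(n_inputs, n_outputs)                # (3, 3)
logits_check = X_train.dot(Theta_check)                           # (90, 3)
Y_proba_check = softmax(logits_check)                             # (90, 3)
error_check = Y_proba_check - Y_train_one_hot                     # (90, 3)
gradients_check = 1 / len(X_train) * X_train.T.dot(error_check)   # (3, 3), same shape as Theta_check
logits_check.shape, Y_proba_check.shape, gradients_check.shape
###Output _____no_output_____ ###Markdown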
The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better?
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes',
labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code # import numpy as np # X = 2 * np.random.rand(100, 1) # y = 4 + 3 * X + np.random.randn(100, 1) # Practice import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) # plt.plot(X, y, "b.") # plt.xlabel("$x_1$", fontsize=18) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.axis([0, 2, 0, 15]) # save_fig("generated_data_plot") # plt.show() # Practice plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) # My Notes: x range from 0 to 2 and y from 0 to 15 save_fig("generated_data_plot") plt.show() # X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance # theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) # Practice X_b = np.c_[np.ones((100, 1)), X] # My Notes: x0 = 1 is your bias term theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) # theta_best theta_best # X_new = np.array([[0], [2]]) # X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance # y_predict = X_new_b.dot(theta_best) # y_predict # Practice X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2,1)), X_new] y_predict = X_new_b.dot(theta_best) y_predict # plt.plot(X_new, y_predict, "r-") # plt.plot(X, y, "b.") # plt.axis([0, 2, 0, 15]) # plt.show() # Practice plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code # plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") # plt.plot(X, y, "b.") # plt.xlabel("$x_1$", fontsize=18) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.legend(loc="upper left", fontsize=14) # plt.axis([0, 2, 0, 15]) # save_fig("linear_model_predictions") # plt.show() # Practice plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() # from sklearn.linear_model import LinearRegression # lin_reg = LinearRegression() # lin_reg.fit(X, y) # lin_reg.intercept_, lin_reg.coef_ # Practice from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ # lin_reg.predict(X_new) # Practice lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code # theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) # theta_best_svd # Practice theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where 
$\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code # np.linalg.pinv(X_b).dot(y) # Practice np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. My Notes:Training Linear Regression models using Normal Equation or Singular Value Decomposition and making predictions is fast when the number of instances or features are small ( < 100,000) Linear regression using batch gradient descent ###Code # eta = 0.1 # n_iterations = 1000 # m = 100 # theta = np.random.randn(2,1) # for iteration in range(n_iterations): # gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) # theta = theta - eta * gradients # Practice eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients # theta # Practice theta X_new_b.dot(theta) # My Notes X_new_b # theta_path_bgd = [] # def plot_gradient_descent(theta, eta, theta_path=None): # m = len(X_b) # plt.plot(X, y, "b.") # n_iterations = 1000 # for iteration in range(n_iterations): # if iteration < 10: # y_predict = X_new_b.dot(theta) # style = "b-" if iteration > 0 else "r--" # plt.plot(X_new, y_predict, style) # gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) # theta = theta - eta * gradients # if theta_path is not None: # theta_path.append(theta) # plt.xlabel("$x_1$", fontsize=18) # plt.axis([0, 2, 0, 15]) # plt.title(r"$\eta = {}$".format(eta), fontsize=16) # Practice theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) # np.random.seed(42) # theta = np.random.randn(2,1) # random initialization # plt.figure(figsize=(10,4)) # plt.subplot(131); plot_gradient_descent(theta, eta=0.02) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) # plt.subplot(133); plot_gradient_descent(theta, eta=0.5) # save_fig("gradient_descent_plot") # plt.show() # Practice np.random.seed(42) 
theta = np.random.randn(2,1) plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent My Notes:SGD does gradient descent for each random instance in the training set and is much faster compared to batch gradient descentEpoch = rounds of *m* iterations where *m* is the number of instances in a training set ###Code # theta_path_sgd = [] # m = len(X_b) # np.random.seed(42) # Practice theta_path_sgd = [] m = len(X_b) np.random.seed(42) # n_epochs = 50 # t0, t1 = 5, 50 # learning schedule hyperparameters # def learning_schedule(t): # return t0 / (t + t1) # theta = np.random.randn(2,1) # random initialization # for epoch in range(n_epochs): # for i in range(m): # if epoch == 0 and i < 20: # not shown in the book # y_predict = X_new_b.dot(theta) # not shown # style = "b-" if i > 0 else "r--" # not shown # plt.plot(X_new, y_predict, style) # not shown # random_index = np.random.randint(m) # xi = X_b[random_index:random_index+1] # yi = y[random_index:random_index+1] # gradients = 2 * xi.T.dot(xi.dot(theta) - yi) # eta = learning_schedule(epoch * m + i) # theta = theta - eta * gradients # theta_path_sgd.append(theta) # not shown # plt.plot(X, y, "b.") # not shown # plt.xlabel("$x_1$", fontsize=18) # not shown # plt.ylabel("$y$", rotation=0, fontsize=18) # not shown # plt.axis([0, 2, 0, 15]) # not shown # save_fig("sgd_plot") # not shown # plt.show() # not shown # Practice n_epochs = 50 t0, t1 = 5, 50 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # My Notes: Only does one Epoch and less than 20 instances y_predict = X_new_b.dot(theta) style = "b-" if i > 0 else "r--" plt.plot(X_new, y_predict, style) random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("sgd_plot") plt.show() # theta # Practice theta # from sklearn.linear_model import SGDRegressor # sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) # sgd_reg.fit(X, y.ravel()) # Practice from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) # sgd_reg.intercept_, sgd_reg.coef_ # Practice sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code # theta_path_mgd = [] # n_iterations = 50 # minibatch_size = 20 # np.random.seed(42) # theta = np.random.randn(2,1) # random initialization # t0, t1 = 200, 1000 # def learning_schedule(t): # return t0 / (t + t1) # t = 0 # for epoch in range(n_iterations): # shuffled_indices = np.random.permutation(m) # X_b_shuffled = X_b[shuffled_indices] # y_shuffled = y[shuffled_indices] # for i in range(0, m, minibatch_size): # t += 1 # xi = X_b_shuffled[i:i+minibatch_size] # yi = y_shuffled[i:i+minibatch_size] # gradients = 2/minibatch_size * 
xi.T.dot(xi.dot(theta) - yi) # eta = learning_schedule(t) # theta = theta - eta * gradients # theta_path_mgd.append(theta) # Practice theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): # My Notes: Shuffle the training set and the labels for every epoch shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) # theta # Practice theta # theta_path_bgd = np.array(theta_path_bgd) # theta_path_sgd = np.array(theta_path_sgd) # theta_path_mgd = np.array(theta_path_mgd) # Practice theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) theta_path_sgd.shape # plt.figure(figsize=(7,4)) # plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") # plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") # plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") # plt.legend(loc="upper left", fontsize=16) # plt.xlabel(r"$\theta_0$", fontsize=20) # plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) # plt.axis([2.5, 4.5, 2.3, 3.9]) # save_fig("gradient_descent_paths_plot") # plt.show() # Practice plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code # import numpy as np # import numpy.random as rnd # np.random.seed(42) # Practice import numpy as np import numpy.random as rnd np.random.seed(42) # m = 100 # X = 6 * np.random.rand(m, 1) - 3 # y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) # Practice m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) # plt.plot(X, y, "b.") # plt.xlabel("$x_1$", fontsize=18) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.axis([-3, 3, 0, 10]) # save_fig("quadratic_data_plot") # plt.show() # Practice plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() # from sklearn.preprocessing import PolynomialFeatures # poly_features = PolynomialFeatures(degree=2, include_bias=False) # X_poly = poly_features.fit_transform(X) # X[0] # Practice from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] # X_poly[0] # Practice X_poly[0] # lin_reg = LinearRegression() # lin_reg.fit(X_poly, y) # lin_reg.intercept_, lin_reg.coef_ # Practice lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ # X_new=np.linspace(-3, 3, 
100).reshape(100, 1) # X_new_poly = poly_features.transform(X_new) # y_new = lin_reg.predict(X_new_poly) # plt.plot(X, y, "b.") # plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") # plt.xlabel("$x_1$", fontsize=18) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.legend(loc="upper left", fontsize=14) # plt.axis([-3, 3, 0, 10]) # save_fig("quadratic_predictions_plot") # plt.show() # Practice X_new = np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() # from sklearn.preprocessing import StandardScaler # from sklearn.pipeline import Pipeline # for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): # polybig_features = PolynomialFeatures(degree=degree, include_bias=False) # std_scaler = StandardScaler() # lin_reg = LinearRegression() # polynomial_regression = Pipeline([ # ("poly_features", polybig_features), # ("std_scaler", std_scaler), # ("lin_reg", lin_reg), # ]) # polynomial_regression.fit(X, y) # y_newbig = polynomial_regression.predict(X_new) # plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) # plt.plot(X, y, "b.", linewidth=3) # plt.legend(loc="upper left") # plt.xlabel("$x_1$", fontsize=18) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.axis([-3, 3, 0, 10]) # save_fig("high_degree_polynomials_plot") # plt.show() # Practice from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg) ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() # from sklearn.metrics import mean_squared_error # from sklearn.model_selection import train_test_split # def plot_learning_curves(model, X, y): # X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) # train_errors, val_errors = [], [] # for m in range(1, len(X_train)): # model.fit(X_train[:m], y_train[:m]) # y_train_predict = model.predict(X_train[:m]) # y_val_predict = model.predict(X_val) # train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) # val_errors.append(mean_squared_error(y_val, y_val_predict)) # plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") # plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") # plt.legend(loc="upper right", fontsize=14) # not shown in the book # plt.xlabel("Training set size", fontsize=14) # not shown # plt.ylabel("RMSE", fontsize=14) # not shown # Practice from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) # My 
Notes: Split dataset to training set to 80% and validation set to 20% train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Training set size", fontsize=14) plt.ylabel("RMSE", fontsize=14) # lin_reg = LinearRegression() # plot_learning_curves(lin_reg, X, y) # plt.axis([0, 80, 0, 3]) # not shown in the book # save_fig("underfitting_learning_curves_plot") # not shown # plt.show() # not shown # Practice lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) save_fig("underfitting_learning_curves_plot") plt.show() ###Output Saving figure underfitting_learning_curves_plot ###Markdown My Notes: For the training set, initially the model is able to fit the model for the first few instances, compared to the validation set, the model has not seen thedata set and it starts off with high RMSE which decreases as it learns the validation set.The above shows the model is **underfitting** ###Code # from sklearn.pipeline import Pipeline # polynomial_regression = Pipeline([ # ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), # ("lin_reg", LinearRegression()), # ]) # plot_learning_curves(polynomial_regression, X, y) # plt.axis([0, 80, 0, 3]) # not shown # save_fig("learning_curves_plot") # not shown # plt.show() # not shown # Practice from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()) ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) save_fig("learning_curves_plot") plt.show() ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code # from sklearn.linear_model import Ridge # np.random.seed(42) # m = 20 # X = 3 * np.random.rand(m, 1) # y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 # X_new = np.linspace(0, 3, 100).reshape(100, 1) # def plot_model(model_class, polynomial, alphas, **model_kargs): # for alpha, style in zip(alphas, ("b-", "g--", "r:")): # model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() # if polynomial: # model = Pipeline([ # ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), # ("std_scaler", StandardScaler()), # ("regul_reg", model), # ]) # model.fit(X, y) # y_new_regul = model.predict(X_new) # lw = 2 if alpha > 0 else 1 # plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) # plt.plot(X, y, "b.", linewidth=3) # plt.legend(loc="upper left", fontsize=15) # plt.xlabel("$x_1$", fontsize=18) # plt.axis([0, 3, 0, 4]) # plt.figure(figsize=(8,4)) # plt.subplot(121) # plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.subplot(122) # plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) # save_fig("ridge_regression_plot") # plt.show() # Practice from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, 
polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 100**-4, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() # from sklearn.linear_model import Ridge # ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) # ridge_reg.fit(X, y) # ridge_reg.predict([[1.5]]) # My Notes # Closed form equation with ridge regression # Practice from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) # sgd_reg.fit(X, y.ravel()) # sgd_reg.predict([[1.5]]) # My Notes # Stochoasitc Gradient Descent with ridge regression # Practice sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) # ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) # ridge_reg.fit(X, y) # ridge_reg.predict([[1.5]]) # Practice ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # from sklearn.linear_model import Lasso # plt.figure(figsize=(8,4)) # plt.subplot(121) # plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) # plt.ylabel("$y$", rotation=0, fontsize=18) # plt.subplot(122) # plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) # save_fig("lasso_regression_plot") # plt.show() # Practice from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() # from sklearn.linear_model import Lasso # lasso_reg = Lasso(alpha=0.1) # lasso_reg.fit(X, y) # lasso_reg.predict([[1.5]]) # Practice from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) # My Notes: Using SGD with l1 penalty sgd_lasso_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l1", random_state=42) sgd_lasso_reg.fit(X, y.ravel()) sgd_lasso_reg.predict([[1.5]]) # from sklearn.linear_model import ElasticNet # elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) # elastic_net.fit(X, y) # elastic_net.predict([[1.5]]) # My Notes: ElasticNet, which is a middle ground between Ridge and Lasso Regression from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) # np.random.seed(42) # m = 100 # X = 6 * np.random.rand(m, 1) - 3 # y = 2 + X + 0.5 * X**2 + np.random.randn(m, 
1) # X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) # poly_scaler = Pipeline([ # ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), # ("std_scaler", StandardScaler()), # ]) # X_train_poly_scaled = poly_scaler.fit_transform(X_train) # X_val_poly_scaled = poly_scaler.transform(X_val) # sgd_reg = SGDRegressor(max_iter=1, # tol=-np.infty, # penalty=None, # eta0=0.0005, # warm_start=True, # learning_rate="constant", # random_state=42) # n_epochs = 500 # train_errors, val_errors = [], [] # for epoch in range(n_epochs): # sgd_reg.fit(X_train_poly_scaled, y_train) # y_train_predict = sgd_reg.predict(X_train_poly_scaled) # y_val_predict = sgd_reg.predict(X_val_poly_scaled) # train_errors.append(mean_squared_error(y_train, y_train_predict)) # val_errors.append(mean_squared_error(y_val, y_val_predict)) # best_epoch = np.argmin(val_errors) # best_val_rmse = np.sqrt(val_errors[best_epoch]) # plt.annotate('Best model', # xy=(best_epoch, best_val_rmse), # xytext=(best_epoch, best_val_rmse + 1), # ha="center", # arrowprops=dict(facecolor='black', shrink=0.05), # fontsize=16, # ) # best_val_rmse -= 0.03 # just to make the graph look better # plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) # plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") # plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") # plt.legend(loc="upper right", fontsize=14) # plt.xlabel("Epoch", fontsize=14) # plt.ylabel("RMSE", fontsize=14) # save_fig("early_stopping_plot") # plt.show() # Practice np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) # Prepare the data poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() # from sklearn.base import clone # sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, # learning_rate="constant", eta0=0.0005, random_state=42) # minimum_val_error = float("inf") # best_epoch = None # best_model = None # for epoch in range(1000): # 
sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off # y_val_predict = sgd_reg.predict(X_val_poly_scaled) # val_error = mean_squared_error(y_val, y_val_predict) # if val_error < minimum_val_error: # minimum_val_error = val_error # best_epoch = epoch # best_model = clone(sgd_reg) # Practice from sklearn.base import clone sgd_reg =SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error =float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # My Notes: Continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error # My Notes: validation error becomes the new minimum val error best_epoch = epoch best_model = clone(sgd_reg) # best_epoch, best_model # Practice best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code # t = np.linspace(-10, 10, 100) # sig = 1 / (1 + np.exp(-t)) # plt.figure(figsize=(9, 3)) # 
plt.plot([-10, 10], [0, 0], "k-") # plt.plot([-10, 10], [0.5, 0.5], "k:") # plt.plot([-10, 10], [1, 1], "k:") # plt.plot([0, 0], [-1.1, 1.1], "k-") # plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") # plt.xlabel("t") # plt.legend(loc="upper left", fontsize=20) # plt.axis([-10, 10, -0.1, 1.1]) # save_fig("logistic_function_plot") # plt.show() # Practice t = np.linspace(-10, 10, 100) sig = 1 / (1+ np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1,1], "k:") plt.plot([-0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() # from sklearn import datasets # iris = datasets.load_iris() # list(iris.keys()) # Practice from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) # print(iris.DESCR) # Practice print(iris.DESCR) # X = iris["data"][:, 3:] # petal width # y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 # Practice X = iris["data"][:, 3:] #petal width y = (iris["target"] == 2).astype(np.int) #1 if Iris-Virginica, else 0 # My Notes: Idenitfy Iris-Virginica class based on the petal width # from sklearn.linear_model import LogisticRegression # log_reg = LogisticRegression(solver="liblinear", random_state=42) # log_reg.fit(X, y) # Practice from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) # X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # y_proba = log_reg.predict_proba(X_new) # plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") # plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") # Practice X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() # X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # y_proba = log_reg.predict_proba(X_new) # decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] # plt.figure(figsize=(8,3)) # plt.plot(X[y==0], y[==0], "bs") # plt.plot(X[y==1], y[y==1], "g^") # plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) # plt.plot(X_new, y_proba[:, 1], "g-", 
linewidth=2, label="Iris-Virginica") # plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") # plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") # plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') # plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') # plt.xlabel("Petal width (cm)", fontsize=14) # plt.ylabel("Probability", fontsize=14) # plt.legend(loc="center left", fontsize=14) # plt.axis([0, 3, -0.02, 1.02]) # save_fig("logistic_regression_plot") # plt.show() # decision_boundary # Practice decision_boundary # log_reg.predict([[1.7], [1.5]]) # Practice log_reg.predict([[1.7], [1.5]]) # from sklearn.linear_model import LogisticRegression # X = iris["data"][:, (2, 3)] # petal length, petal width # y = (iris["target"] == 2).astype(np.int) # log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) # log_reg.fit(X, y) # x0, x1 = np.meshgrid( # np.linspace(2.9, 7, 500).reshape(-1, 1), # np.linspace(0.8, 2.7, 200).reshape(-1, 1), # ) # X_new = np.c_[x0.ravel(), x1.ravel()] # y_proba = log_reg.predict_proba(X_new) # plt.figure(figsize=(10, 4)) # plt.plot(X[y==0, 0], X[y==0, 1], "bs") # plt.plot(X[y==1, 0], X[y==1, 1], "g^") # zz = y_proba[:, 1].reshape(x0.shape) # contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) # left_right = np.array([2.9, 7]) # boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] # plt.clabel(contour, inline=1, fontsize=12) # plt.plot(left_right, boundary, "k--", linewidth=3) # plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") # plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") # plt.xlabel("Petal length", fontsize=14) # plt.ylabel("Petal width", fontsize=14) # plt.axis([2.9, 7, 0.8, 2.7]) # save_fig("logistic_regression_contour_plot") # plt.show() # Practice from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2,3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10,4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5 ,1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() # X = iris["data"][:, (2, 3)] # petal length, petal width # y = iris["target"] # softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) # softmax_reg.fit(X, y) # Practice X = iris["data"][:, (2,3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) # x0, x1 = 
np.meshgrid( # np.linspace(0, 8, 500).reshape(-1, 1), # np.linspace(0, 3.5, 200).reshape(-1, 1), # ) # X_new = np.c_[x0.ravel(), x1.ravel()] # y_proba = softmax_reg.predict_proba(X_new) # y_predict = softmax_reg.predict(X_new) # zz1 = y_proba[:, 1].reshape(x0.shape) # zz = y_predict.reshape(x0.shape) # plt.figure(figsize=(10, 4)) # plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") # plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") # plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") # from matplotlib.colors import ListedColormap # custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) # plt.contourf(x0, x1, zz, cmap=custom_cmap) # contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) # plt.clabel(contour, inline=1, fontsize=12) # plt.xlabel("Petal length", fontsize=14) # plt.ylabel("Petal width", fontsize=14) # plt.legend(loc="center left", fontsize=14) # plt.axis([0, 7, 0, 3.5]) # save_fig("softmax_regression_contour_plot") # plt.show() # Practice x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10,4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0', "#9898ff", "#a0faa0"]) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1 ,fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() ###Output Saving figure softmax_regression_contour_plot ###Markdown My Notes: This shows the decision boundary for *Iris verginica* class ###Code # softmax_reg.predict([[5, 2]]) # Practice softmax_reg.predict([[5,2]]) # softmax_reg.predict_proba([[5, 2]]) # Practice softmax_reg.predict_proba([[5, 2]]) # My Notes: It predicts 94.2% chance of being in class 2, which Iris Verginica flower ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
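My Notes (added): a quick reminder of what is being reused, before the exercise starts (my own check; it assumes `iris` is still loaded from earlier).
###Code
print(iris["data"].shape)            # (150, 4) -> the exercise keeps only petal length and width
print(iris["target_names"])          # ['setosa' 'versicolor' 'virginica']
print(np.bincount(iris["target"]))   # [50 50 50]: a balanced three-class problem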
###Code
# X = iris["data"][:, (2, 3)]  # petal length, petal width
# y = iris["target"]

# Practice
X = iris["data"][:, (2, 3)]  # petal length, petal width
y = iris["target"]

# My Notes
X.shape

# My Notes
X[:150]
y[:200]

###Output
_____no_output_____
###Markdown
We need to add the bias term for every instance ($x_0 = 1$):

###Code
# X_with_bias = np.c_[np.ones([len(X), 1]), X]

# Practice
X_with_bias = np.c_[np.ones([len(X), 1]), X]
X_with_bias

###Output
_____no_output_____
###Markdown
And let's set the random seed so the output of this exercise solution is reproducible:

###Code
np.random.seed(2042)

# Practice
np.random.seed(2042)

# My Notes
len(X_with_bias)

###Output
_____no_output_____
###Markdown
The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation:

###Code
# test_ratio = 0.2
# validation_ratio = 0.2
# total_size = len(X_with_bias)

# test_size = int(total_size * test_ratio)
# validation_size = int(total_size * validation_ratio)
# train_size = total_size - test_size - validation_size

# rnd_indices = np.random.permutation(total_size) # My Notes: Randomized indices

# X_train = X_with_bias[rnd_indices[:train_size]]
# y_train = y[rnd_indices[:train_size]]

# X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
# y_valid = y[rnd_indices[train_size:-test_size]]

# X_test = X_with_bias[rnd_indices[-test_size:]]
# y_test = y[rnd_indices[-test_size:]]

# Practice
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias) # My Notes: 150

test_size = int(total_size * test_ratio) # My Notes: 30
validation_size = int(total_size * validation_ratio) # My Notes: 30
train_size = total_size - test_size - validation_size # My Notes: 90

rnd_indices = np.random.permutation(total_size) # My Notes: Shuffled indices of all 150 instances

X_train = X_with_bias[rnd_indices[:train_size]] # My Notes: Rows of X_with_bias at the first 90 shuffled indices (training set)
y_train = y[rnd_indices[:train_size]]

X_valid = X_with_bias[rnd_indices[train_size:-test_size]] # My Notes: Rows at the next 30 shuffled indices, i.e. positions 90 to 119 (validation set)
y_valid = y[rnd_indices[train_size:-test_size]]

X_test = X_with_bias[rnd_indices[-test_size:]] # My Notes: Rows at the last 30 shuffled indices (test set)
y_test = y[rnd_indices[-test_size:]]

# My Notes
len(y_train)

# My Notes
rnd_indices[-30:]

###Output
_____no_output_____
###Markdown
The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
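My Notes (added): before writing the helper below, note that NumPy can also build one-hot rows in a single line by indexing an identity matrix with the class indices (my own aside, not from the book).
###Code
np.eye(3)[np.array([0, 2, 1])]   # each class index picks the matching row of the 3x3 identity matrix
###Markdown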
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code # def to_one_hot(y): # n_classes = y.max() + 1 # m = len(y) # Y_one_hot = np.zeros((m, n_classes)) # Y_one_hot[np.arange(m), y] = 1 # return Y_one_hot # Practice def to_one_hot(y): n_classes = y.max() + 1 # My Notes: Total number of classes m = len(y) Y_one_hot = np.zeros((m, n_classes)) # My Notes: Create the 0's array Y_one_hot[np.arange(m), y] = 1 # My Notes: Inputs a 1 in the 0's array based on the each index of y_train and y_train return Y_one_hot ###Output _____no_output_____ ###Markdown My Notes: arange function means creating arrays with increasing values ###Code # My Notes np.arange(len(y_train)) # My Notes y_train # My Notes np.zeros((len(y_train), y_train.max()+1)) ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code # y_train[:10] # Practice y_train[:10] to_one_hot(y_train[:10]) # Practice to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code # Y_train_one_hot = to_one_hot(y_train) # Y_valid_one_hot = to_one_hot(y_valid) # Y_test_one_hot = to_one_hot(y_test) # My Notes: One hot encoding for the labels # Practice Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code # def softmax(logits): # exps = np.exp(logits) # exp_sums = np.sum(exps, axis=1, keepdims=True) # return exps / exp_sums def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) # My Notes: Sum them up across the rows and it becomes a vector return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code # n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) # n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) # Practice n_inputs = X_train.shape[1] # == 3 features (2 features, petal's width + length and the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) # My Notes X_train.shape[1] X_train.shape ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. 
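My Notes (added): for example, a quick check of the shapes that the training code below has to combine (my own check, using the arrays defined above).
###Code
print(X_train.shape)            # (90, 3): one row per training instance, n_inputs columns (bias included)
print(Y_train_one_hot.shape)    # (90, 3): one row of target class probabilities per instance
print(n_inputs, n_outputs)      # 3 3: Theta must therefore have shape (n_inputs, n_outputs)
###Markdown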
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code # My Notes: A random 3X3 array # np.random.randn(n_inputs, n_outputs) np.random.randn(n_inputs, n_outputs) len(X_train) # eta = 0.01 # n_iterations = 5001 # m = len(X_train) # epsilon = 1e-7 # Theta = np.random.randn(n_inputs, n_outputs) # for iteration in range(n_iterations): # logits = X_train.dot(Theta) # Y_proba = softmax(logits) # loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) # error = Y_proba - Y_train_one_hot # if iteration % 500 == 0: # print(iteration, loss) # gradients = 1/m * X_train.T.dot(error) # Theta = Theta - eta * gradients # Practice eta = 0.01 # My Notes: Learning rate, how fast you want to descent in gradient descent n_iterations = 5001 m = len(X_train) # 90 instances epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) # Randomly initialize Theta with a 3 by 3 shape for iteration in range(n_iterations): # My Notes: Up to 5000 iterations logits = X_train.dot(Theta) # My Notes: Get the softmax scores Y_proba = softmax(logits) # My Notes: Use softmax function on the softmax scores. The Y_proba is your Pk(i) as seen in the cost function equation loss = -np.mean(np.sum(Y_train_one_hot*np.log(Y_proba + epsilon), axis=1)) # My Notes: Calculate the cost function, sum them up across the rows into a vector and then average them, refer to evernote for more details error = Y_proba - Y_train_one_hot # My Notes: Pk(i) - yk(i) if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) # My Notes: Perform cross entropy gradient vector Theta = Theta - eta * gradients # My Notes: Update the Theta parameters ###Output 0 6.101695333134016 500 0.7495955964003087 1000 0.6335867601044719 1500 0.5627809964824694 2000 0.5154218410707392 2500 0.48116101400112515 3000 0.45483325048430245 3500 0.43366178684830964 4000 0.4160434121580318 4500 0.4009931010780988 5000 0.38787288168306744 ###Markdown My Notes: -np.mean is same as -1/m ###Code # My Notes logits = X_train.dot(Theta) Y_proba = softmax(logits) Y_proba # My Notes: y_predict = np.argmax(Y_proba, axis=1) y_predict ###Output _____no_output_____ ###Markdown And that's it! The Softmax model is trained. 
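My Notes (added): as a quick sanity check of my own, here is the accuracy on the training set itself; the validation set is evaluated just below.
###Code
train_logits = X_train.dot(Theta)
train_proba = softmax(train_logits)
train_predict = np.argmax(train_proba, axis=1)   # predicted class = index of the highest probability
np.mean(train_predict == y_train)                # training-set accuracy (chance level would be about 1/3)
###Markdown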
Let's look at the model parameters: ###Code # Theta # Practice Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code # logits = X_valid.dot(Theta) # Y_proba = softmax(logits) # y_predict = np.argmax(Y_proba, axis=1) # accuracy_score = np.mean(y_predict == y_valid) # accuracy_score # Practice logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) # My Notes: Gets the index of the higest value in each row and all of them are placed in a vector accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code # eta = 0.1 # n_iterations = 5001 # m = len(X_train) # epsilon = 1e-7 # alpha = 0.1 # regularization hyperparameter # Theta = np.random.randn(n_inputs, n_outputs) # for iteration in range(n_iterations): # logits = X_train.dot(Theta) # Y_proba = softmax(logits) # xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) # l2_loss = 1/2 * np.sum(np.square(Theta[1:])) # loss = xentropy_loss + alpha * l2_loss # error = Y_proba - Y_train_one_hot # if iteration % 500 == 0: # print(iteration, loss) # gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] # Theta = Theta - eta * gradients eta= 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) # My Notes: Cross entropy loss l2_loss = 1/2 * np.sum(np.square(Theta[1:])) # My Notes: l2 regularization, first row of Theta is not regularized as it corresponds to the bias term loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients Theta Theta[1:] ###Output _____no_output_____ ###Markdown My Notes: np.r_ concatenates the np.zeros(...) array with alpha * Theta[1:] along the first axis, (Along the row) ###Code # My Notes np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])] # My Notes np.r_['0,2,1', [1,2,3], [4,5,6]] ###Output _____no_output_____ ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code # logits = X_valid.dot(Theta) # Y_proba = softmax(logits) # y_predict = np.argmax(Y_proba, axis=1) # accuracy_score = np.mean(y_predict == y_valid) # accuracy_score # Practice logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) # My Notes: Take the index (AKA your class) of the max value from each row and creates an array accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown My Notes: Stopped here 26/4/2020 Cool, perfect accuracy! 
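My Notes (added): the validation set only has 30 instances, so a single misclassification would already cost about 3.3% accuracy; a quick check of my own:
###Code
print(len(y_valid))                  # 30 validation instances
print((y_predict == y_valid).sum())  # how many of them the regularized model classifies correctly
###Markdown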
We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code # eta = 0.1 # n_iterations = 5001 # m = len(X_train) # epsilon = 1e-7 # alpha = 0.1 # regularization hyperparameter # best_loss = np.infty # Theta = np.random.randn(n_inputs, n_outputs) # for iteration in range(n_iterations): # logits = X_train.dot(Theta) # Y_proba = softmax(logits) # xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) # l2_loss = 1/2 * np.sum(np.square(Theta[1:])) # loss = xentropy_loss + alpha * l2_loss # error = Y_proba - Y_train_one_hot # gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] # Theta = Theta - eta * gradients # logits = X_valid.dot(Theta) # Y_proba = softmax(logits) # xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) # l2_loss = 1/2 * np.sum(np.square(Theta[1:])) # loss = xentropy_loss + alpha * l2_loss # if iteration % 500 == 0: # print(iteration, loss) # if loss < best_loss: # best_loss = loss # else: # print(iteration - 1, best_loss) # print(iteration, loss, "early stopping!") # break eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): # My Notes: Iterating through Training set logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients # My Notes: Iterating through Validation set logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss # My Notes: loss becomes the new best_loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break # logits = X_valid.dot(Theta) # Y_proba = softmax(logits) # y_predict = np.argmax(Y_proba, axis=1) # accuracy_score = np.mean(y_predict == y_valid) # accuracy_score # Practice logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
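My Notes (added): a common refinement of early stopping is to wait for a few non-improving iterations before giving up and then roll back to the best parameters. The sketch below is my own variant (the `patience` idea is not in the book) and assumes the same training and validation arrays as above.
###Code
eta, alpha, epsilon, patience = 0.1, 0.1, 1e-7, 20
best_loss, best_Theta, waited = np.infty, None, 0
Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(5001):
    # one batch gradient descent step on the regularized cross entropy
    error = softmax(X_train.dot(Theta)) - Y_train_one_hot
    gradients = 1/len(X_train) * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # regularized validation loss, computed as in the loop above
    Y_proba_valid = softmax(X_valid.dot(Theta))
    val_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
                + alpha * 1/2 * np.sum(np.square(Theta[1:])))

    if val_loss < best_loss:
        best_loss, best_Theta, waited = val_loss, Theta.copy(), 0   # remember the best parameters
    else:
        waited += 1
        if waited >= patience:   # stop only after `patience` iterations without improvement
            break

Theta = best_Theta   # roll back to the best parameters seen
###Markdown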
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
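# My Notes (added): theta_best computed above should land close to the true parameters used to
# generate the data (intercept 4 and slope 3 in y = 4 + 3*x1 + noise); it will not match them
# exactly because of the Gaussian noise.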
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, 
y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from 
sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
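My Notes (added): before the SGD version in the next cell, here is a closed-form check of the Ridge cost above. This is my own sketch, assuming the minimizer is $\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X} + \alpha \mathbf{A})^{-1} \mathbf{X}^T \mathbf{y}$ with $\mathbf{A}$ an identity matrix whose bias entry is zeroed so the bias term is not regularized; the result should closely match the `Ridge(alpha=1, solver="cholesky")` model fitted above.
###Code
# assumes X and y are still the 20 samples generated for the Ridge example above
X_b_ridge = np.c_[np.ones((len(X), 1)), X]      # add x0 = 1 to every sample
alpha = 1
A = np.identity(X_b_ridge.shape[1])
A[0, 0] = 0                                     # do not regularize the bias term
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + alpha * A).dot(X_b_ridge.T).dot(y)
theta_ridge                                     # [intercept, slope], close to Ridge's intercept_ and coef_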
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt 
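# My Notes (added): the code below reproduces the book's Lasso-vs-Ridge figure; it draws the
# contours of the l1 and l2 penalties and of the regularized MSE cost over (theta_1, theta_2),
# together with the batch gradient descent paths toward each minimum.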
import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - 
y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ 
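My Notes (added): a tiny numeric check of the softmax and cross-entropy formulas above (식 4-20 and 식 4-22), using a made-up score vector for a single instance; this is my own example, not part of the book.
###Code
import numpy as np

s = np.array([2.0, 1.0, 0.1])                 # class scores s_k(x) for one instance
p = np.exp(s) / np.sum(np.exp(s))             # softmax probabilities (식 4-20); they sum to 1
y_one_hot = np.array([1.0, 0.0, 0.0])         # the true class is k = 0
xentropy = -np.sum(y_one_hot * np.log(p))     # cross entropy for this instance (식 4-22 with m = 1)
p, xentropy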
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.4106007142918715 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629506 1000 0.503640075014894 1500 0.4946891059460321 2000 0.49129684180754774 2500 0.489899247009333 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
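The next cell stops training as soon as the validation loss rises once. As a small variant (an extra sketch, not part of the original notebook), it is common to tolerate a few consecutive bad validation checks before stopping, because the validation loss can fluctuate from one iteration to the next; the helper below assumes a hypothetical patience of 5 checks. ###Code
class EarlyStopping:
    # Stop once the validation loss has failed to improve `patience` times in a row.
    def __init__(self, patience=5):       # patience=5 is an assumed value, tune as needed
        self.patience = patience
        self.best_loss = float("inf")
        self.bad_checks = 0

    def should_stop(self, val_loss):
        if val_loss < self.best_loss:     # improvement: remember it and reset the counter
            self.best_loss = val_loss
            self.bad_checks = 0
        else:                             # no improvement: count it
            self.bad_checks += 1
        return self.bad_checks >= self.patience ###Output _____no_output_____ ###Markdown Inside a training loop you would call `should_stop(loss)` once per iteration and `break` when it returns `True`; the original cell below keeps the simpler break-on-first-increase rule.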
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). 
`np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{+}\mathbf{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 경사 하강법 배치 경사 하강법 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output 그림 저장: high_degree_polynomials_plot ###Markdown 학습 곡선 ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline 
polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 선형 모델 릿지 회귀 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
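Before the SGD-based ridge model mentioned in the note above, it can help to write out the ridge normal equation (Equation 4-9 in the book) directly with NumPy. This is an extra sketch, not part of the original notebook: `X_b_ridge` and `theta_ridge` are helper names introduced here, and the result should closely match the `Ridge(alpha=1)` models fitted earlier because the bias term is left unregularized. ###Code
import numpy as np

X_b_ridge = np.c_[np.ones((len(X), 1)), X]    # add x0 = 1 to the current 20-sample dataset

alpha = 1
A = np.identity(X_b_ridge.shape[1])
A[0, 0] = 0                                   # do not regularize the bias term

# Equation 4-9: theta_hat = (X^T X + alpha * A)^(-1) X^T y
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + alpha * A).dot(X_b_ridge.T).dot(y)
theta_ridge ###Output _____no_output_____ ###Markdown The first entry of `theta_ridge` plays the role of the intercept and the second the slope, so a prediction at x = 1.5 built from them should give roughly the same value as `ridge_reg.predict([[1.5]])` above.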
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ 라쏘 회귀 ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 엘라스틱넷 **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 조기 종료 ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as 
plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 결정 경계 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T 
\mathbf{x}^{(i)}) - y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary[0], 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary[0], 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown 소프트맥스 회귀 ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} 
\sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 
다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.4946891059460321 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.489035124439786 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 
확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). 
`np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{+}\mathbf{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 경사 하강법 배치 경사 하강법 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output 그림 저장: high_degree_polynomials_plot ###Markdown 학습 곡선 ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline 
polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 선형 모델 릿지 회귀 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ 라쏘 회귀 ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 엘라스틱넷 **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 조기 종료 ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as 
plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 결정 경계 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T 
\mathbf{x}^{(i)}) - y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary[0], 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary[0], 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown 소프트맥스 회귀 ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} 
\sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 
다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.4946891059460321 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.489035124439786 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 
확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
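The pseudoinverse itself is computed via the Singular Value Decomposition of $\mathbf{X}$: singular values below a tiny threshold are zeroed out before inverting, which keeps the computation well defined even when $\mathbf{X}^T \mathbf{X}$ is not invertible. Here is a minimal sketch of that computation (the `1e-10` cutoff is just an illustrative choice): ###Code
# Sketch: build the Moore-Penrose pseudoinverse of X_b from its SVD
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)   # X_b factors as U @ np.diag(s) @ Vt
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]                  # invert only the non-negligible singular values
X_b_pinv = Vt.T @ np.diag(s_inv) @ U.T               # pseudoinverse: V @ diag(s_inv) @ U.T
theta_via_svd = X_b_pinv.dot(y)                      # same theta_hat as the Normal Equation
###Output _____no_output_____ ###Markdown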
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
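# Overlay the mini-batch and batch GD paths for comparison: batch GD heads almost
# straight for the minimum, while the stochastic and mini-batch paths bounce around it.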
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
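# With the degree-10 polynomial the training error ends up well below the validation
# error, leaving a persistent gap between the two curves: a telltale sign of overfitting.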
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") 
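# mark the minimum of the regularized cost JR with a red square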
plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") 
plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
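As an extra sanity check (not required by the exercise), we can also measure the model's accuracy on the training set it was just fitted on: ###Code
train_logits = X_train.dot(Theta)
train_proba = softmax(train_logits)
y_train_predict = np.argmax(train_proba, axis=1)
train_accuracy = np.mean(y_train_predict == y_train)   # fraction of training instances classified correctly
###Output _____no_output_____ ###Markdown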
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
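Out of curiosity, the variables left over from the training loop record where early stopping kicked in: ###Code
# `iteration` holds the epoch at which the loop stopped (the validation loss had just
# stopped improving), and `best_loss` the lowest regularized validation loss seen.
early_stopping_summary = {"stopped_at_epoch": iteration, "best_val_loss": best_loss}
###Output _____no_output_____ ###Markdown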
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # 불필요한 경고를 무시합니다 (사이파이 이슈 #5998 참조) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
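# predict y at x1 = 0 and x1 = 2 using the closed-form parameters theta_best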
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, 
y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from 
sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) # Pipeline=True이면 현재 모델이 파이프라인내의 모델로 들어가고 그 전에 PolynomialFeatures와 StandardScaler가 수행된다. y_new_regul = model.predict(X_new) # 그런데 pipeline.predict를 수행하면 테스트 데이터에 대해서 파이프라인 함수들을 내부적으로 수행하는 듯 하다. lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=True, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=False, alphas=(0, 10**-5, 1), random_state=42) plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
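(책에는 없음) 참고로, 릿지 회귀의 해는 정규방정식 형태 $\hat{\boldsymbol{\theta}} = (\mathbf{X}^T\mathbf{X} + \alpha\mathbf{A})^{-1}\mathbf{X}^T\mathbf{y}$로도 직접 구할 수 있습니다. 여기서 $\mathbf{A}$는 편향에 해당하는 왼쪽 위 원소만 0인 단위행렬입니다. 아래는 확인용으로 작성한 간단한 예시입니다(alpha=1, 편향은 규제하지 않는다고 가정한 스케치입니다): ###Code
# 책에는 없음: 정규방정식으로 릿지 해를 직접 계산해 보는 예시
X_b_ridge = np.c_[np.ones((m, 1)), X]    # 각 샘플에 x0 = 1 추가
A_reg = np.identity(X_b_ridge.shape[1])
A_reg[0, 0] = 0                          # 편향(theta_0)은 규제하지 않음
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + 1 * A_reg).dot(X_b_ridge.T).dot(y)
np.array([[1, 1.5]]).dot(theta_ridge)    # Ridge(alpha=1)의 predict([[1.5]])와 거의 같은 값이 나와야 합니다
###Output _____no_output_____ ###Markdown 이제 같은 $\ell_2$ 규제를 확률적 경사 하강법으로 적용해 보겠습니다(위 노트에서 설명한 설정을 사용합니다):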
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) # ravel()은 Numpy 다차원 배열을 1차원으로 바꿔줌. sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline 
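# 책에는 없음: 아래 코드는 그림 4-19(라쏘 vs 릿지)를 그리기 위한 준비로, theta1, theta2 평면 위에서 MSE 비용 함수 J와 l1/l2 노름(N1, N2)을 계산하고 시각화합니다.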
import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) plt.title("t1 plot", fontsize=16) plt.imshow(t1) plt.show() plt.title("t2 plot", fontsize=16) plt.imshow(t2) plt.show() a1 = t1.ravel() # ravel()은 다차원 어레이를 1차원 어레이로 바꿔준다. a2 = t2.ravel() a1s, a2s = a1.shape, a2.shape T = np.c_[t1.ravel(), t2.ravel()] # (250000, 2). np.c_: 두번째 axis (왼쪽에서 오른 방향)으로 연결되도록 변환한다. Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] a3 = len(Xr) a4 = (1/len(Xr)) a5 = T.dot(Xr.T) a6 = (T.dot(Xr.T) - yr.T) a7 = (T.dot(Xr.T) - yr.T)**2 a8 = np.sum((T.dot(Xr.T) - yr.T)**2, axis=1) a9 = np.sum((T.dot(Xr.T) - yr.T)**2, axis=1).reshape(t1.shape) J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) # J=500x500사이즈. J는 아래로 볼록한 bowl의 일부형상을 비추는 꼴인데, (333,374)지점이 가장 아래지점이고 그 주변이 밑바닥이고 # J의 왼쪽 위가 bowl의 가장자리이고 여기값이 가장 큰값(14.0)이다. plt.title("J plot", fontsize=16) plt.imshow(J) plt.show() a10 = np.linalg.norm(T, ord=1, axis=1) # T=250000x2사이즈. T의 각 행별로 1st norm을 구한다. 1열의 절대값+2열의 절대값 a11 = np.linalg.norm(T, ord=2, axis=1) # T=250000x2사이즈. T의 각 행별로 2nd norm을 구한다. sqrt( (1열값)**2 + (2열값)**2 ) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) # N1=500x500사이즈. N1은 약 (250,120) 지점의 값이 가장 작고, 이 지점을 중심으로 같은길이의 마름모 형태의 등고선이 주변으로 퍼져 나간다. # 오른쪽윗점과 오른쪽아래점이 가장 큰 값이다. # N2=500x500사이즈. N2도 약 (250,120) 지점의 값이 가장 작고, 이 지점을 중심으로 원 형태의 등고선이 주변으로 퍼져 나간다. # 오른쪽윗점과 오른쪽아래점이 가장 큰 값이다. N1min = np.min(N1) # 0.005 N1max = np.max(N1) # 4.5 N2min = np.min(N2) # 0.0036 N2max = np.max(N2) # 3.35 plt.title("N1 plot", fontsize=16) plt.imshow(N1) plt.show() plt.title("N2 plot", fontsize=16) plt.imshow(N2) plt.show() plt.title("N2 ** 2 plot", fontsize=16) plt.imshow(N2 ** 2) plt.show() a12 = np.argmin(J) a13 = J.shape t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] # J의 최소값위치의 x,y축 인덱스 t_init = np.array([[0.25], [-1]]) # figure 4-19의 오른쪽 그림 2개의 흰색점의 초기위치. def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): a1 = X.dot(theta) a2 = (X.dot(theta) - y) a3 = X.T.dot(X.dot(theta) - y) a4 = core * 2 / len(X) * X.T.dot(X.dot(theta) - y) a5 = np.sign(theta) a6 = l1 * np.sign(theta) a7 = l2 * theta gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) N, l1, l2 = N1, 2., 0 JR = J + l1 * N1 + l2 * 0.5 * N2 ** 2 # plt.title("N1 plot l1_{0}_l2_{1}".format(l1, l2), fontsize=16) plt.title("J + 2 * N1", fontsize=16) plt.imshow(JR) plt.show() N, l1, l2 = N2, 0, 2. 
JR = J + l1 * N1 + l2 * 0.5 * N2 ** 2 # plt.title("N2 plot l1_{0}_l2_{1}".format(l1, l2), fontsize=16) plt.title("J + 2 * 0.5 * N2 ** 2", fontsize=16) plt.imshow(JR) plt.show() fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 b1 = np.argmin(JR) b2 = JR.shape tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) # tr_min_idx = Lasso: (250, 260), Ridge: t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] # t1, t2에 인덱스를 넣어 최소값 위치의 theta1, theta2의 값을 # t1r_min, t2r_min에 저장함 a1 = np.linspace(0, 1, 20) a2 = np.exp(np.linspace(0, 1, 20)) a3 = (np.exp(np.linspace(0, 1, 20)) - 1) # Lasso: [0~1.718] a4 = np.max(J) # Lasso: 14 a5 = np.min(J) # Lasso: 5.68e-06 a6 = (np.max(J) - np.min(J)) # Lasso: 13.9999 a7 = (np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) # Lasso: [0~24.06] a8 = np.max(JR) # Lasso: 19.0 a9 = np.min(JR) # Lasso: 3.35 a10 = (np.max(JR) - np.min(JR)) # Lasso: 15.65 a11 = (np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) # Lasso: [0~26.89] a12 = np.max(N) # Lasso: 4.5 levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) # Lasso: [0~24.06]. J의 가장 minimum값인 0에 14*[0~1.718]을 더한값. (20,) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) # Lasso: [3.35~30.24]. JR의 가장 minimum값인 3.35에 15.65*[0~1.718]을 더한값. (20,) levelsN=np.linspace(0, np.max(N), 10) # Lasso: [0~4.5]. (10,) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) # [[2.0], [0.5]]은 figure 4-19의 노란색 점의 초기위치. # path_N.shape = (201, 2, 1). 2x1 행렬이 201층으로 쌓여있음 path_J_0 = path_J[:,:,0] # path_J (201,2,1)에서 3차원을 없애줘서 (201,2)크기임. path_JR_0 = path_JR[:, :, 0] path_N_0 = path_N[:, :, 0] # path_N (201,2,1)에서 3차원을 없애줘서 (201,2)크기임. path_JR0 = path_JR[:, 0] # path_JR의 (201,2,1)의 두번째 차원의 첫번째 값 위치의 201층인 값들. path_JR1 = path_JR[:, 1] # path_JR의 (201,2,1)의 두번째 차원의 두번째 값 위치의 201층인 값들. path_N0 = path_N[:, 0] # path_N의 (201,2,1)의 두번째 차원의 첫번째 값 위치의 201층인 값들. path_N1 = path_N[:, 1] # path_N의 (201,2,1)의 두번째 차원의 두번째 값 위치의 201층인 값들. ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) # contour(X, Y, Z, levels)는 Z의 값을 그리는데, X는 x축위치, Y는 y축위치, Z는 등고선의 높이이다. X,Y의 사이즈는 Z와 같아야. 
# levels는 contour를 그릴 등고선의 높이들을 몇개 지정해준다 ax.plot(path_N[:, 0], path_N[:, 1], "y--") # path_N의 x,y좌표를 따라 그려준다 ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") # J의 최소값 위치가 초기점이다 ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # (1000, 1). 이렇게 하면 1000X1행렬이 된다. X_new1 = np.linspace(0, 3, 1000) # (1000, ) # 단순히 1000개의 배열이 된다. 
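# 책에는 없음: 사이킷런의 predict()/predict_proba()는 (샘플 수, 특성 수) 모양의 2차원 배열을 기대하므로 reshape(-1, 1)로 열 벡터를 만들어야 하며, X_new1처럼 1차원 배열을 그대로 넣으면 오류가 발생합니다.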
y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") X_new.shape, X_new1.shape ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) y_p_more_0p5 = y_proba[:, 1] >= 0.5 X_new_more_0p5 = X_new[y_proba[:, 1] >= 0.5] # 확률이 0.5이상인 샘플들만의 X_new값들 decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] # 확률이 0.5이상인 샘플들만의 X_new값들 중 1번째 샘플값 plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') # arrow함수: 1,2번째가 화살표 시작 위치의 x,y위치, 3,4번째가 초기위치로부터의 이동할 x,y거리 plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() y_p_more_0p5.shape, X_new_more_0p5.shape decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] # ravel()은 다차원 어레이를 1차원 어레이로 바꿔준다. # np.c_: 두번째 axis (왼쪽에서 오른 방향)으로 연결되도록 변환한다. 
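# 책에는 없음: meshgrid로 꽃잎 길이/너비 평면을 덮는 격자 점들을 만든 뒤, 모든 격자 점에서 예측 확률을 계산해 등고선과 결정 경계를 그리기 위한 준비입니다.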
# x0.ravel().shape, x1.ravel().shape, X_new.shape, y_proba.shape, x0.shape, zz.shape # = ((100000,), (100000,), (100000, 2), (100000, 2), (200, 500), (200, 500)) y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") # X[y==0, 0], X[y==0, 1]: 각각 y==0인 샘플들의 x좌표, y좌표 plt.plot(X[y==1, 0], X[y==1, 1], "g^") # X[y==0, 0], X[y==0, 1]: 각각 y==1인 샘플들의 x좌표, y좌표 zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] # https://scipython.com/blog/plotting-the-decision-boundary-of-a-logistic-regression-model/ plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() x0.ravel().shape, x1.ravel().shape, X_new.shape, y_proba.shape, x0.shape, zz.shape log_reg.coef_, log_reg.intercept_ boundary ###Output _____no_output_____ ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] # ravel()은 다차원 어레이를 1차원 어레이로 바꿔준다. # np.c_: 두번째 axis (왼쪽에서 오른 방향)으로 연결되도록 변환한다. # x0.ravel().shape, x1.ravel().shape, X_new.shape, y_proba.shape, y_predict.shape = # (100000,), (100000,), (100000, 2), (100000, 3), (100000,), y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) # plt.cm.brg에서 brg는 뭐지? blue-red-green인가? plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. 
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] X_with_bias.shape, y.shape, len(X_with_bias) ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) # logits=[90x3]이면 exps=[90x3]. logits의 각 값에 exp를 취할 뿐 exp_sums = np.sum(exps, axis=1, keepdims=True) # [90x3]을 행단위로 더하여 [90x1]로 만든다 return exps / exp_sums # [90x3]/[90x1]로 분모의 [90x1]이 3열로 복사되어 [90x3]/[90x3]이 된다고 볼 수 있다. ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. 
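(책에는 없음) $\epsilon$이 필요한 이유는 넘파이에서 $0 \times \log(0)$이 nan이 되기 때문입니다. 간단히 확인해 볼 수 있습니다: ###Code
# 책에는 없음: epsilon이 없으면 0 * log(0) = nan이 되는 것을 확인하는 예시
np.array([0.]) * np.log(np.array([0.]))           # array([nan]) (RuntimeWarning 발생)
np.array([0.]) * np.log(np.array([0.]) + 1e-7)    # array([-0.])
###Output _____no_output_____ ###Markdown 이제 훈련 루프를 구현합니다: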
###Code eta = 0.01 n_iterations = 5001 m = len(X_train) # 90 epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) # [3x3] for iteration in range(n_iterations): logits = X_train.dot(Theta) # [90x3]*[3x3] = [90x3] Y_proba = softmax(logits) # [90x3] yk_proba = Y_train_one_hot * np.log(Y_proba + epsilon) # 여기서의 곱은 행렬곱이 아닌 elementwise 곱. [90x3]*[90x3] = [90x3] yk_proba_sum = np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1) # (90,) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) # eq. 4-22를 그대로 구현. error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) # eq. 4-23을 그대로 구현. Theta = Theta - eta * gradients ###Output 0 5.173284880908112 500 0.8258143504756522 1000 0.6740383508681776 1500 0.5891518016822946 2000 0.5353052890403674 2500 0.4975988211901051 3000 0.469220320068328 3500 0.4467104744290491 4000 0.4281482798645294 4500 0.41238656131534807 5000 0.3986986115898958 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) # eq. 4-19 그대로 구현. Y_proba = softmax(logits) # eq. 4-20에서처럼 s_{k}(x)을 소프트맥스 함수에 입력한다. y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) # 90 epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) # [3x3] for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) # eq. 4-8 그대로 구현. loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] # l2규제의 gradient의 식은 어디있지? 
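# 책에는 없음: 위 질문에 대한 답: 규제항 alpha * (1/2) * sum(theta**2)을 theta로 미분하면 alpha * theta가 되므로, 바로 위의 np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] 부분이 l2 규제의 그레이디언트입니다(첫 행은 편향이므로 규제하지 않아 0).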
Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
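The pseudoinverse itself is computed from the Singular Value Decomposition of $\mathbf{X}$, inverting only the nonzero singular values. Here is a rough sketch (not in the original notebook, and assuming none of the singular values are close to zero): ###Code
# Not in the original notebook: the pseudoinverse via SVD (X = U diag(s) Vt  =>  X_pinv = Vt.T diag(1/s) U.T)
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
X_b_pinv = Vt.T.dot(np.diag(1 / s)).dot(U.T)  # assumes all singular values are nonzero
X_b_pinv.dot(y)                               # should be very close to theta_best
###Output _____no_output_____ ###Markdown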
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
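# Not in the original notebook: this figure compares the paths taken in (theta0, theta1) space by Stochastic, Mini-batch and Batch Gradient Descent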
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), 
]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
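As a quick sanity check (not in the original notebook), the ridge solution can also be computed in closed form as $\hat{\boldsymbol{\theta}} = (\mathbf{X}^T\mathbf{X} + \alpha\mathbf{A})^{-1}\mathbf{X}^T\mathbf{y}$, where $\mathbf{A}$ is the identity matrix with a 0 in the top-left cell so that the bias term is not regularized; here is a small sketch assuming alpha=1: ###Code
# Not in the original notebook: closed-form ridge solution (alpha=1, bias term unregularized)
X_b_ridge = np.c_[np.ones((m, 1)), X]  # add x0 = 1 to each instance
A_reg = np.identity(X_b_ridge.shape[1])
A_reg[0, 0] = 0                        # do not regularize the bias term
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + 1 * A_reg).dot(X_b_ridge.T).dot(y)
np.array([[1, 1.5]]).dot(theta_ridge)  # should be very close to ridge_reg.predict([[1.5]])
###Output _____no_output_____ ###Markdown Now let's apply the same $\ell_2$ regularization with Stochastic Gradient Descent, using the settings from the note above: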
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
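One detail worth spelling out for the plots below (not stated explicitly in the original notebook): since the model estimates $\hat{p} = \sigma(\theta_0 + \theta_1 x_1)$, the decision boundary is the petal width where $\theta_0 + \theta_1 x_1 = 0$, i.e. $x_1 = -\theta_0 / \theta_1$; the `decision_boundary` variable computed below approximates this by taking the first grid point where the estimated probability reaches 0.5.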
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), #petal length dim for lines @chit np.linspace(0, 3.5, 200).reshape(-1, 1), #petal width dim for lines ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = 
ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
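Concretely, the regularized cost implemented below (not written out as an equation in the original notebook) is $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \alpha \dfrac{1}{2}\sum\limits_{j \geq 1}\sum\limits_{k}{\theta_{j,k}^2}$ (the sum over $j$ skips the bias row), so the extra term in each gradient is simply $\alpha\,\theta_{j,k}$ for every non-bias parameter.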
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
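One refinement worth mentioning as an aside (the exercise does not need it): the validation loss can wobble a little from one iteration to the next, so instead of stopping at the very first increase you can keep a copy of the best parameters seen so far and only stop after several consecutive non-improving iterations. Below is a minimal sketch of that idea, reusing the variables defined above; it trains a separate `Theta_patience` so the `Theta` used in the next cells is left untouched, and the `patience` value of 50 is an arbitrary choice. ###Code
best_loss = np.infty
patience = 50                      # arbitrary number of non-improving iterations to tolerate
bad_steps = 0
Theta_patience = np.random.randn(n_inputs, n_outputs)
best_Theta = Theta_patience.copy()

for iteration in range(n_iterations):
    # same regularized gradient step as above
    error = softmax(X_train.dot(Theta_patience)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_patience[1:]]
    Theta_patience = Theta_patience - eta * gradients

    # regularized loss on the validation set, as in the early-stopping loop above
    Y_proba_valid = softmax(X_valid.dot(Theta_patience))
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    val_loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta_patience[1:]))

    if val_loss < best_loss:
        best_loss, best_Theta, bad_steps = val_loss, Theta_patience.copy(), 0
    else:
        bad_steps += 1
        if bad_steps >= patience:
            Theta_patience = best_Theta    # roll back to the best parameters seen
            break
###Output _____no_output_____ ###Markdown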
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary. ###Code ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
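In case you are wondering how that pseudoinverse is obtained: it is computed from the Singular Value Decomposition of the matrix, $\mathbf{X}^{+} = \mathbf{V} \mathbf{\Sigma}^{+} \mathbf{U}^T$, where $\mathbf{\Sigma}^{+}$ is built by inverting every singular value above some small threshold and zeroing out the rest. Here is a minimal sketch using the `X_b` and `y` arrays defined above (the `1e-10` threshold is an arbitrary choice for illustration): ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]           # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)  # pseudoinverse: V @ diag(s_inv) @ U.T
X_b_pinv.dot(y)                               # same theta as the Normal Equation result
###Output _____no_output_____ ###Markdown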
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
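A side note on the `softmax()` implementation used here: with larger scores, `np.exp()` can overflow. A common remedy, not needed for the small Iris logits in this exercise, is to subtract the per-row maximum first; this leaves the result unchanged because shifting all the scores of an instance by the same constant does not change the softmax output. A sketch of that variant: ###Code
def softmax_stable(logits):
    # subtracting the row-wise max keeps np.exp from overflowing without changing the result
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown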
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
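Since the same three prediction lines (compute the logits, apply the softmax, take the argmax) are repeated for every evaluation in this exercise, a small helper can shorten the remaining cells; this is a sketch, not something the exercise requires: ###Code
def predict_classes(X, Theta):
    # X must already include the bias column, like X_valid and X_test above
    return np.argmax(softmax(X.dot(Theta)), axis=1)

np.mean(predict_classes(X_valid, Theta) == y_valid)   # same validation accuracy as above
###Output _____no_output_____ ###Markdown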
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # 불필요한 경고를 무시합니다 (사이파이 이슈 #5998 참조) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). 
`np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = 
np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt 
import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - 
y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ 
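A quick numeric illustration of the cross-entropy cost defined above (hypothetical probabilities, for intuition only): with one-hot targets, each instance contributes $-\log$ of the probability the model assigned to its true class, so a confident correct prediction costs almost nothing while an unsure or wrong one costs a lot. ###Code
y_true_demo = np.array([[1., 0., 0.],
                        [0., 1., 0.]])   # one-hot targets for two instances
p_demo = np.array([[0.9, 0.05, 0.05],
                   [0.2, 0.3, 0.5]])     # predicted class probabilities
-np.mean(np.sum(y_true_demo * np.log(p_demo), axis=1))   # roughly (0.105 + 1.204) / 2, i.e. about 0.65
###Output _____no_output_____ ###Markdown Now let's fit a Softmax Regression model on the petal features: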
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) X[0], y[0] # (array([1.85708478]), array([11.38114423])) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
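One way to see this in action: duplicate a feature column so that $\mathbf{X}^T \mathbf{X}$ becomes singular, then compare the Normal Equation with the pseudoinverse. This is only an illustrative sketch (reusing the `X` and `y` generated above), not part of the original notebook:

```
X_sing = np.c_[np.ones((100, 1)), X, X]   # bias column plus the same feature twice -> rank-deficient
try:
    theta_ne = np.linalg.inv(X_sing.T.dot(X_sing)).dot(X_sing.T).dot(y)
    print("Normal Equation returned", theta_ne.ravel())   # if it returns at all, the values are unreliable
except np.linalg.LinAlgError as err:
    print("Normal Equation fails:", err)                  # X_sing.T.dot(X_sing) is singular
np.linalg.pinv(X_sing).dot(y)   # the pseudoinverse still yields a (minimum-norm) least-squares solution
```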
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
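# The three paths tell the story: batch GD heads fairly smoothly toward the minimum, while the
# stochastic and mini-batch paths bounce around it and only settle nearby as their learning
# schedules shrink the step size.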
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) X[0], y[0] #(array([-0.75275929]), array([1.61761105])) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown 
Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", 
linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", 
linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) 
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.4106007142918712 5000 0.3956780375390373 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
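Concretely, this is a sketch of what the next cell computes (with the bias row of `Theta` excluded from the penalty): $J_{reg}(\mathbf{\Theta}) = J(\mathbf{\Theta}) + \alpha \, \dfrac{1}{2}\sum\limits_{k=1}^{K}\sum\limits_{j=1}^{n}{\left(\theta_j^{(k)}\right)^2}$, and the gradient for class $k$ simply gains the extra term $\alpha \, \mathbf{\theta}^{(k)}$ with its bias component set to 0, which is exactly what the `np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]` expression below adds.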
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629507 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830817 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
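The next cell stops as soon as the validation loss goes up once. With a noisier validation loss, a common variant (not used in this notebook) is to keep training until the loss has failed to improve for several consecutive iterations, often called the patience. A minimal sketch of that rule, assuming you track the validation loss yourself:

```
def early_stopping_update(loss, best_loss, bad_steps, patience=10):
    # Patience-based variant of the stop-at-first-increase rule used below.
    if loss < best_loss:
        return loss, 0, False            # new best: reset the counter, keep training
    bad_steps += 1
    return best_loss, bad_steps, bad_steps >= patience   # stop once patience is exhausted
```

Each training iteration would call it with the current validation loss and break out of the loop when the third return value is `True`.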
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output Saving figure generated_data_plot ###Markdown n 4-4. 
Normal Equation$\hat{\theta}=\left(\mathbf{X}^{T} \cdot \mathbf{X}\right)^{-1} \cdot \mathbf{X}^{T} \cdot \mathbf{y}$```np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)``` ###Code X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent Partial derivatives of the cost function$\frac{\partial}{\partial \theta_{j}} \operatorname{MSE}(\theta)=\frac{2}{m} \sum_{i=1}^{m}\left(\theta^{T} \cdot \mathbf{x}^{(i)}-y^{(i)}\right) x_{j}^{(i)}$ Gradient vector of the cost function$\nabla_{\theta} \operatorname{MSE}(\theta)=\left(\begin{array}{c}{\frac{\partial}{\partial \theta_{0}} \operatorname{MSE}(\theta)} \\ {\frac{\partial}{\partial \theta_{1}} \operatorname{MSE}(\theta)} \\ {\vdots} \\ {\frac{\partial}{\partial \theta_{n}} \operatorname{MSE}(\theta)}\end{array}\right)=\frac{2}{m} \mathbf{X}^{T} \cdot(\mathbf{X} \cdot \theta-\mathbf{y})$ ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) ###Output _____no_output_____ ###Markdown Mean Squared Error$\mathrm{MSE}=\frac{1}{n} \sum_{i=1}^{n}\left(Y_{i}-\hat{Y}_{i}\right)^{2}$ Cost Function$\frac{1}{2 m} \sum_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right)-y^{(i)}\right)^{2}$ ###Code theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) 
plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(14,8)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-+", linewidth=0.5, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=0.5, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-+", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ 
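# Sanity check: y was generated above as 0.5 * X**2 + X + 2 plus Gaussian noise, so the fitted
# intercept should land near 2 and the two coefficients near [1, 0.5] (up to that noise).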
X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, 
linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), 
"r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be 
the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = 
plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. 
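Before that, a quick sanity check (my addition, not in the original notebook): every row of the softmax output should be a valid probability distribution, i.e. non-negative and summing to 1. ###Code
# Quick usage check of the softmax() defined in the previous cell (my addition).
import numpy as np

sample_logits = np.array([[1.0, 2.0, 3.0],
                          [0.0, 0.0, 0.0]])
sample_probas = softmax(sample_logits)   # softmax() comes from the cell above
print(sample_probas)
print(sample_probas.sum(axis=1))         # each row sums to 1 (up to floating-point rounding)
###Output
_____no_output_____
###Markdown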
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the $\log$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
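Spelled out in the same notation (my addition, to make that extra term explicit), the regularized cost the next cell minimizes is $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{k=1}^{K}\sum\limits_{j=1}^{n}{\left(\theta_j^{(k)}\right)^2}$, and its gradients become $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}} + \alpha \, \mathbf{\theta}^{(k)}$, where the bias component $\theta_0^{(k)}$ is excluded from the penalty (which is why the code builds the regularization term with a row of zeros on top of `alpha * Theta[1:]`).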
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
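As a cross-check (my addition, not in the original notebook), scikit-learn's own softmax regression, the same `LogisticRegression(multi_class="multinomial")` model used earlier in this notebook, can be trained on the same split; it should reach a comparable validation accuracy. Note that it is given the plain features, without the bias column, since it fits the intercept itself. ###Code
# Cross-check with scikit-learn's softmax regression (my addition).
# X_train, X_valid, y_train and y_valid come from the cells above; column 0 is the bias term x0 = 1.
from sklearn.linear_model import LogisticRegression

softmax_clf = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                 C=10, random_state=42)
softmax_clf.fit(X_train[:, 1:], y_train)
softmax_clf.score(X_valid[:, 1:], y_valid)   # mean accuracy on the validation set
###Output
_____no_output_____
###Markdown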
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary. ###Code ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = 
X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = 
Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = 
np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
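A related compatibility note (my addition): the `astype(np.int)` call in the cell above uses an alias that was deprecated in NumPy 1.20 and removed in NumPy 1.24, so on a recent NumPy it raises an `AttributeError`. The built-in `int` gives the same result: ###Code
# Future-proof variant of the target construction above (my addition):
# np.int was removed in NumPy 1.24; the built-in int behaves identically here.
y = (iris["target"] == 2).astype(int)   # 1 if Iris virginica, else 0
###Output
_____no_output_____
###Markdown
(The same substitution applies to the other `astype(np.int)` calls further down.)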
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) 
plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. 
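One caveat first (my addition, not in the original notebook): because `softmax()` exponentiates the raw logits, very large scores can overflow `np.exp()`. Subtracting each row's maximum beforehand leaves the result mathematically unchanged but avoids the overflow: ###Code
# Numerically stable softmax variant (my addition): shifting by the row maximum
# does not change the probabilities, but keeps np.exp() from overflowing.
import numpy as np

def stable_softmax(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

stable_softmax(np.array([[1000.0, 1001.0, 1002.0]]))   # the plain version above would overflow here
###Output
_____no_output_____
###Markdown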
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
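The only non-obvious expression in the next cell is that regularization term in the gradients, `np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]`: it multiplies every parameter by $\alpha$ except the first row of `Theta` (the bias terms), which it replaces with zeros. A tiny standalone illustration (my addition): ###Code
# What np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] builds (my addition):
# the bias row is zeroed out, every other parameter is scaled by alpha.
import numpy as np

alpha_demo = 0.1
Theta_demo = np.array([[10., 20., 30.],   # first row plays the role of the bias terms
                       [ 1.,  2.,  3.],
                       [ 4.,  5.,  6.]])
np.r_[np.zeros([1, Theta_demo.shape[1]]), alpha_demo * Theta_demo[1:]]
###Output
_____no_output_____
###Markdown
With that in mind, here is the regularized training loop: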
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
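A small equivalence note (my addition): the hand-rolled accuracy used throughout this exercise, `np.mean(y_predict == y_valid)`, computes exactly what `sklearn.metrics.accuracy_score` would return. ###Code
# The manual accuracy above matches scikit-learn's accuracy_score (my addition).
# Imported under another name so it does not clash with the accuracy_score variable above.
import numpy as np
from sklearn.metrics import accuracy_score as sk_accuracy_score

y_true_demo = np.array([0, 1, 2, 2, 1])
y_pred_demo = np.array([0, 2, 2, 2, 1])
np.mean(y_pred_demo == y_true_demo), sk_accuracy_score(y_true_demo, y_pred_demo)   # both give 0.8
###Output
_____no_output_____
###Markdown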
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance print("x_new_b : \n",X_new_b) y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd X_b[:10] ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
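The pseudoinverse itself is obtained from the Singular Value Decomposition $\mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$ as $\mathbf{X}^{+} = \mathbf{V}\mathbf{\Sigma}^{+}\mathbf{U}^T$, where $\mathbf{\Sigma}^{+}$ inverts the non-zero singular values and leaves the (near-)zero ones at zero. A short sketch of that computation (my addition, not in the original notebook): ###Code
# Building the pseudoinverse from the SVD by hand (my addition); the result should match
# np.linalg.pinv(X_b) up to floating-point error. X_b and y come from the cells above.
import numpy as np

U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10 * s.max(), 1 / s, 0.0)   # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)                                      # same theta as np.linalg.pinv(X_b).dot(y)
###Output
_____no_output_____
###Markdown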
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
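One practical aside (my addition, not part of the original solution): the `softmax()` above applies `np.exp()` directly, so it can overflow for large logits. A numerically stable sketch subtracts the per-row maximum first, which leaves the probabilities unchanged: ###Code
# Hedged sketch of a numerically stable softmax (aside): subtracting the row-wise
# maximum before exponentiating avoids overflow without changing the output.
def stable_softmax(logits):
    shifted = logits - np.max(logits, axis=1, keepdims=True)  # row max becomes 0
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)

stable_softmax(np.array([[1000.0, 1001.0, 1002.0]]))  # the plain version would overflow here
###Output
_____no_output_____
###Markdown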
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
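As a quick sanity check (my addition, not part of the original exercise), Scikit-Learn's own softmax regression should reach a similar validation accuracy on the same split; the manual bias column is dropped because `LogisticRegression` fits its own intercept: ###Code
# Hedged cross-check against Scikit-Learn's softmax regression (aside).
# The hyperparameters below are assumptions chosen to mirror the earlier cells.
from sklearn.linear_model import LogisticRegression
sk_softmax = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                C=10, random_state=42)
sk_softmax.fit(X_train[:, 1:], y_train)
sk_softmax.score(X_valid[:, 1:], y_valid)
###Output
_____no_output_____
###Markdown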
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance X_b.shape plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) lin_reg.predict(X_new).shape ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
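A hedged aside (my addition, not part of the original notebook): the pseudoinverse remains well defined even when $\mathbf{X}^T \mathbf{X}$ is singular, which is exactly the case where the Normal Equation breaks down, for instance when a column of $\mathbf{X}$ is duplicated: ###Code
# Aside: a hypothetical rank-deficient design matrix built by duplicating a column.
X_dup = np.c_[X_b, X_b[:, 1]]
np.linalg.matrix_rank(X_dup)      # rank 2 with 3 columns, so X_dup.T @ X_dup is singular
np.linalg.pinv(X_dup).dot(y)      # the pseudoinverse still yields a (minimum-norm) solution
###Output
_____no_output_____
###Markdown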
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) X_b.shape len(X_b) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta y.shape y.ravel().shape from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) # sgd_reg.fit(X, y) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], 
theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) 
plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
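Before the SGD-based variant below, a hedged sketch (my addition, not part of the original notebook) of the closed-form Ridge solution from the chapter, $\hat{\boldsymbol{\theta}} = \left(\mathbf{X}^T \mathbf{X} + \alpha \mathbf{A}\right)^{-1} \mathbf{X}^T \mathbf{y}$, where $\mathbf{A}$ is the identity matrix with a 0 in the top-left cell so that the bias term is not regularized: ###Code
# Hedged sketch: closed-form Ridge with an unpenalized bias term (aside).
ridge_alpha = 1
X_b_ridge = np.c_[np.ones((len(X), 1)), X]   # add x0 = 1 to each instance
A = np.identity(X_b_ridge.shape[1])
A[0, 0] = 0                                  # do not regularize the bias term
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + ridge_alpha * A).dot(X_b_ridge.T).dot(y)
theta_ridge                                  # should closely match ridge_reg.intercept_ and ridge_reg.coef_
###Output
_____no_output_____
###Markdown Back to the SGD-based version, with the future-proof settings mentioned in the note above: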
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = 
np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 X.shape y.shape ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) # y_proba.shape # a = y_proba[:, 1] >= 0.5 # a X_new.shape print (X_new[0:3]) print (X_new[0:3][0]) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] decision_boundary X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new.shape # x0.shape # x0.ravel().shape # y_proba = log_reg.predict_proba(X_new) # y_proba.shape from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = 
softmax_reg.predict(X_new) y_proba.shape y_predict.shape y_predict x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] y X.shape ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code import numpy as np X_with_bias = np.c_[np.ones([len(X), 1]), X] X_with_bias.shape ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] X_train.shape ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code a = np.zeros((10, 3)) a[np.arange(10), 1] =1 a def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) Y_train_one_hot.shape ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. 
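A quick hedged illustration (my addition) of why the epsilon is needed: in floating point, $\log(0)$ is $-\infty$ and $0 \cdot \log(0)$ evaluates to `nan`, which would poison the loss: ###Code
# Aside: demonstrating the nan problem that the tiny epsilon avoids.
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.log(0.0))           # -inf
    print(0.0 * np.log(0.0))     # nan
print(np.log(0.0 + 1e-7))        # very negative, but finite
###Output
_____no_output_____
###Markdown With that in mind, here is the training loop: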
###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) # print (logits.shape) Y_proba = softmax(logits) # print (Y_proba.shape) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot # print (error.shape) if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) # print (gradients.shape) Theta = Theta - eta * gradients ###Output 0 3.3235491052911152 500 0.6603039134340473 1000 0.5848240055005667 1500 0.5341324248674592 2000 0.49712503592239465 2500 0.4684734986837643 3000 0.4453329797444631 3500 0.42604807845103737 4000 0.40958730270193267 4500 0.39527174810845456 5000 0.38263431042255586 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) print (logits.shape) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) print (Y_proba.shape) print (y_predict.shape) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output (30, 3) (30, 3) (30,) ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code np.r_[np.array([[1,2,3]]), np.array([[4,5,6],[4,5,6]] )] # np.array([[4,5,6],[4,5,6]] ).shape # np.array([[1,2,3]]).shape eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss # print ('ploss shplae',np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1).shape) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) # print (np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]].shape) # print (Theta[1:].shape) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 7.269212277457515 500 0.5386045099793083 1000 0.5049803769302914 1500 0.4951726731546381 2000 0.49149062906063995 2500 0.48998117915286216 3000 0.48933475526709597 3500 0.4890509583684393 4000 0.4889244655490105 4500 0.4888675436645237 5000 0.48884176951229086 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. 
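(An optional refinement, my addition rather than part of the original solution: the validation loss is usually noisy, so early stopping is often given a patience window that tolerates a few upticks before halting; a minimal sketch of such a check is below. The version in the next cell simply stops at the first increase.) ###Code
# Hedged sketch of a patience-based stopping check (hypothetical helper, unused below).
def should_stop(val_losses, patience=5):
    """Return True once the best validation loss is more than `patience` iterations old."""
    best_iteration = int(np.argmin(val_losses))
    return len(val_losses) - 1 - best_iteration >= patience
###Output
_____no_output_____
###Markdown The plain first-increase version used in this notebook: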
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code import matplotlib.pyplot as plt x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary. ###Code ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
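If you want to see which versions you actually have before the assertions run, a quick check of my own is: ###Code
import sys
import sklearn

# Print the interpreter and Scikit-Learn versions checked by the next cell.
print("Python:", sys.version.split()[0])
print("Scikit-Learn:", sklearn.__version__)
###Output _____no_output_____ ###Markdown The setup cell itself: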
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
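The pseudoinverse comes from the Singular Value Decomposition: if $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$, then $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ inverts the non-negligible singular values and zeroes out the rest. Here is a small verification sketch of mine (`X_b_pinv` and `s_inv` are just illustrative names): ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.array([1 / si if si > 1e-10 else 0.0 for si in s])  # invert only non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)  # same theta as theta_best_svd above
###Output _____no_output_____ ###Markdown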
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
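# The next two calls add the Mini-batch and Batch paths, so the figure compares
# all three Gradient Descent variants in (theta_0, theta_1) space: the Stochastic
# path plotted above is the most erratic, Mini-batch wanders less, and Batch
# heads the most directly toward the minimum.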
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
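# The noticeable gap between the training curve (lower RMSE) and the validation
# curve in this figure is the classic sign that the degree-10 model overfits.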
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
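As a quick aside of mine rather than a cell from the solution, fancy-indexing an identity matrix produces the same encoding in one line, which makes a handy cross-check for the helper written next: ###Code
np.eye(3)[np.array([0, 2, 1])]  # rows 0, 2 and 1 of the identity matrix, i.e. one-hot vectors
###Output _____no_output_____ ###Markdown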
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
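As a quick sanity check of my own, every row of the predicted probability matrix should sum to 1, since the Softmax output is a distribution over the three classes: ###Code
Y_proba = softmax(X_train.dot(Theta))
np.allclose(Y_proba.sum(axis=1), 1.0)  # True: each row is a probability distribution
###Output _____no_output_____ ###Markdown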
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
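One detail worth flagging: when the loop above breaks, `Theta` holds the parameters from the iteration that made the validation loss go up, not the best ones seen. Here is a variant of mine (a sketch only, with `Theta_rb`, `best_Theta` and `best_loss_rb` as illustrative names) that keeps a copy of the best parameters and rolls back to them before stopping: ###Code
Theta_rb = np.random.randn(n_inputs, n_outputs)   # fresh parameters for this variant
best_loss_rb = np.infty
best_Theta = Theta_rb.copy()

for iteration in range(n_iterations):
    # gradient step on the training set (the l2 term excludes the bias row)
    error = softmax(X_train.dot(Theta_rb)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_rb[1:]]
    Theta_rb = Theta_rb - eta * gradients

    # regularized loss on the validation set
    Y_proba_valid = softmax(X_valid.dot(Theta_rb))
    xentropy = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    val_loss = xentropy + alpha * 1/2 * np.sum(np.square(Theta_rb[1:]))

    if val_loss < best_loss_rb:
        best_loss_rb = val_loss
        best_Theta = Theta_rb.copy()
    else:
        Theta_rb = best_Theta   # roll back to the best parameters, then stop
        break
###Output _____no_output_____ ###Markdown Here the rollback only undoes a single gradient step, so it barely changes anything, but it becomes more useful when the validation loss is noisy.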
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
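A side note of mine: `np.linalg.lstsq()` also returns the rank of `X_b` and the residual sum of squares, and its solution matches the Normal Equation result: ###Code
print(np.allclose(theta_best, theta_best_svd))  # True: both approaches find the same theta
print(rank)        # 2, since the bias column and x1 are linearly independent
print(residuals)   # sum of squared residuals of the least-squares fit
###Output _____no_output_____ ###Markdown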
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
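In case you are curious how that pseudoinverse is actually obtained: it is computed from the Singular Value Decomposition $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$ as $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ inverts every singular value above a small threshold and zeroes out the rest. Here is a minimal sketch of that computation (my addition, not from the book; the `1e-10` threshold is an arbitrary choice), which should reproduce `theta_best` up to floating-point error: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_plus = np.zeros_like(s)
s_plus[s > 1e-10] = 1 / s[s > 1e-10]           # invert only the significant singular values
X_b_pinv = Vt.T.dot(np.diag(s_plus)).dot(U.T)  # Moore-Penrose pseudoinverse of X_b
X_b_pinv.dot(y)                                # should match theta_best, up to rounding
###Output _____no_output_____ ###Markdown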
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") import pandas as pd ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
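A practical reason to prefer the pseudoinverse over inverting $\mathbf{X}^T \mathbf{X}$ is that it keeps working when $\mathbf{X}^T \mathbf{X}$ is singular, for example when a feature column is duplicated. A small sketch of this (my addition, not from the book): the Normal Equation typically raises a `LinAlgError` (or returns a numerically meaningless result), while the pseudoinverse still returns the minimum-norm least-squares solution: ###Code
X_dup = np.c_[X_b, X_b[:, 1]]   # duplicate a feature column, so X_dup.T.dot(X_dup) is singular
try:
    np.linalg.inv(X_dup.T.dot(X_dup)).dot(X_dup.T).dot(y)   # Normal Equation
except np.linalg.LinAlgError as err:
    print("Normal Equation failed:", err)
np.linalg.pinv(X_dup).dot(y)    # still works: minimum-norm least-squares solution
###Output _____no_output_____ ###Markdown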
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
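One small aside before we inspect it (my addition, not needed for this dataset, where the logits stay small): `np.exp()` can overflow for large logits, so a common refinement of `softmax()` is to subtract the per-row maximum from the logits before exponentiating. The result is unchanged, because adding a constant to every logit of an instance does not change its softmax: ###Code
def softmax_stable(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)   # row max becomes 0, so exp() cannot overflow
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)

np.allclose(softmax_stable(X_train.dot(Theta)), softmax(X_train.dot(Theta)))   # should be True
###Output _____no_output_____ ###Markdown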
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
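A brief practical note (my addition, not from the book): stopping the very first time the validation loss ticks up can be premature when the loss curve is noisy. A common variant tolerates a *patience* budget of non-improving iterations and rolls back to the best parameters seen so far. Here is a minimal sketch of that bookkeeping, reusing the same regularized training step; it works on its own copy `Theta_patience`, so the cells below are unaffected: ###Code
patience = 50                         # how many non-improving iterations to tolerate
best_loss_patience = np.infty
best_Theta_patience = None
no_progress = 0

Theta_patience = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
    # same regularized gradient step as above
    logits = X_train.dot(Theta_patience)
    Y_proba = softmax(logits)
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_patience[1:]]
    Theta_patience = Theta_patience - eta * gradients

    # validation loss (cross entropy plus l2 penalty), as above
    logits = X_valid.dot(Theta_patience)
    Y_proba = softmax(logits)
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta_patience[1:]))

    if loss < best_loss_patience:
        best_loss_patience = loss
        best_Theta_patience = Theta_patience.copy()
        no_progress = 0
    else:
        no_progress += 1
        if no_progress >= patience:
            print(iteration, best_loss_patience, "early stopping (patience exhausted)")
            break

Theta_patience = best_Theta_patience   # roll back to the best parameters seen on the validation set
###Output _____no_output_____ ###Markdown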
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output Saving figure generated_data_plot ###Markdown good explanation of the normal equation (eq. 
4-4): https://www.geeksforgeeks.org/ml-normal-equation-in-linear-regression/ ###Code X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 
2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), 
linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown ridge regression $$ J(\theta) = \rm{MSE}(\theta) + \alpha \sum^n_{i=1}\theta^2_i$$$J(\theta)$ is the cost function. derivation of Equation 4-9 https://stats.stackexchange.com/questions/69205/how-to-derive-the-ridge-regression-solution ###Code from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
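Before moving on to the SGD-based version in the cell below, here is a minimal sketch of the closed-form Ridge solution behind Equation 4-9, reusing the small `X`, `y` generated above (the matrix `A` is the identity with a zero in the top-left cell so that the bias term is not regularized; the variable names are illustrative, not from the book): ###Code
alpha = 1.0
X_b_reg = np.c_[np.ones((len(X), 1)), X]   # add the bias column x0 = 1
A = np.identity(X_b_reg.shape[1])
A[0, 0] = 0                                # leave the bias term unregularized
theta_ridge = np.linalg.inv(X_b_reg.T.dot(X_b_reg) + alpha * A).dot(X_b_reg.T).dot(y)
np.array([[1, 1.5]]).dot(theta_ridge)      # should closely match ridge_reg.predict([[1.5]]) above
###Output
_____no_output_____
###Markdown
The SGD-based Ridge below simply selects an $\ell_2$ penalty: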
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown lasso regression $$ J(\theta) = \rm{MSE}(\theta) + \alpha \sum^n_{i=1}|\theta_i|$$$J(\theta)$ is the cost function ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown elastic net$$ J(\theta) = \rm{MSE}(\theta) + r\alpha\sum^n_{i=1}|\theta_i| + \frac{1-r}{2} \alpha\sum^n_{i=1}\theta^2_i$$$J(\theta)$ is the cost function. Elastic net is middle ground between Ridge regression ($l_2$ norm for regularization) and Lasso Regression ($l_1$ norm for regularization). ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown when should you use plain Linear Regression (i.e., without any regularization), Ridge, Lasso, or Elastic Net? ``It is almost always preferable to have at least a little bit of regularization, so generally you should avoid plain Linear Regression. Ridge is a good default, but if you suspect that only a few features are useful, you should prefer Lasso or Elastic Net because they tend to reduce the useless features’ weights down to zero, as we have discussed. In general, Elastic Net is preferred over Lasso because Lasso may behave erratically when the number of features is greater than the number of training instances or when several features are strongly correlated.'' early stoppinga way to regularize the interative learning algorithm such as gradient descent is to stop training as soon as the validation error reaches a minimum. 
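As a quick aside, the claim above that Lasso tends to drive the weights of useless features all the way down to zero is easy to check on throwaway random data (this small sketch is not from the book; `X_demo`, `y_demo` and the chosen `alpha` values are illustrative assumptions): ###Code
from sklearn.linear_model import Ridge, Lasso

np.random.seed(42)
X_demo = np.random.randn(100, 10)                                     # 10 features...
y_demo = 3 * X_demo[:, 0] - 2 * X_demo[:, 1] + np.random.randn(100)  # ...only the first 2 are useful

ridge_demo = Ridge(alpha=1.0).fit(X_demo, y_demo)
lasso_demo = Lasso(alpha=0.1).fit(X_demo, y_demo)
np.sum(ridge_demo.coef_ == 0), np.sum(lasso_demo.coef_ == 0)          # Lasso typically zeroes out most of the useless weights
###Output
_____no_output_____
###Markdown
Back to early stopping: the next cells generate a small quadratic dataset and fit a high-degree polynomial model with SGD to illustrate it.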
###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = 
t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
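Before fitting the model in the next cell, it helps to recall the cost function that Logistic Regression minimizes, the log loss; there is no closed-form solution for it, which is why an iterative solver such as `"lbfgs"` is used:$$ J(\theta) = -\dfrac{1}{m}\sum^m_{i=1}\left[ y^{(i)}\log\left(\hat{p}^{(i)}\right) + \left(1 - y^{(i)}\right)\log\left(1 - \hat{p}^{(i)}\right)\right]$$The small NumPy helper below is only an illustrative sketch of this formula (the `eps` clipping is an added assumption to avoid `log(0)`), not part of the book's code: ###Code
def log_loss(y_true, p_hat, eps=1e-15):
    p_hat = np.clip(p_hat, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

log_loss(np.array([0, 1, 1]), np.array([0.1, 0.8, 0.6]))   # small worked example
###Output
_____no_output_____
###Markdown
Now let's train a Logistic Regression model on the petal width feature: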
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) 
plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. 
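As a quick sanity check before training (the made-up logits below are an illustrative assumption, not book code), every row of the output should be a valid probability distribution that sums to 1: ###Code
demo_logits = np.array([[1.0, 2.0, 3.0],
                        [0.0, 0.0, 0.0]])
softmax(demo_logits), softmax(demo_logits).sum(axis=1)   # rows sum to 1; equal logits give uniform probabilities
###Output
_____no_output_____
###Markdown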
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
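Concretely, and consistent with the code in the next cell, the penalized cost being minimized is$$ J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{K}\theta_{j,k}^2$$where $j = 0$ (the bias row of $\mathbf{\Theta}$) is excluded from the penalty, so each gradient simply picks up an extra $\alpha\,\theta_{j,k}$ term for every weight except the bias weights.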
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
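Out of curiosity (this extra check is not in the book), the loop's `iteration` variable tells us how early training actually stopped compared to the 5,001 iterations allowed: ###Code
iteration
###Output
_____no_output_____
###Markdown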
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercices in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: 
# not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression 
= Pipeline(( ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), )) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), )) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), )) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 
1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline(( ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), )) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = 
theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", 
fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. 
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import
os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. 
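If you want to see that SVD connection concretely, here is a small sketch (reusing the `X_b` and `y` defined above): it builds the pseudoinverse from the SVD by inverting only the non-negligible singular values, and recovers the same parameters as `np.linalg.pinv()` or `lstsq()`. The `1e-10` cutoff is an arbitrary choice for this illustration. ###Code
# Sketch: compute the Moore-Penrose pseudoinverse of X_b from its SVD
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.)        # invert only the significant singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)  # pseudoinverse of X_b
X_b_pinv.dot(y)                               # same theta as np.linalg.pinv(X_b).dot(y)
###Output _____no_output_____ ###Markdown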
However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) 
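# The parameter paths recorded by the three training loops above (batch, stochastic and mini-batch) are stacked into arrays here so the trajectories can be compared in the plot below.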
theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = 
Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', 
xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure 
lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 =
np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
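For reference, such a one-hot matrix can also be built in a single line by indexing an identity matrix; this is just a sketch for comparison, and the explicit function below is what the rest of the solution uses: ###Code
# Sketch: row k of np.eye(n_classes) is the one-hot vector for class k
np.eye(3)[np.array([0, 2, 1])]
###Output _____no_output_____ ###Markdown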
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
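Before looking at the parameters, a quick sanity check along the lines suggested above (a sketch reusing the arrays already defined): `Theta` should have one row per input, including the bias, and one column per class, and every row of the probability matrix should sum to 1. ###Code
# Sketch: verify the shapes and that each row of Y_proba is a valid distribution
print(Theta.shape)    # (n_inputs, n_outputs) == (3, 3)
print(Y_proba.shape)  # (len(X_train), n_outputs)
np.allclose(Y_proba.sum(axis=1), 1)
###Output _____no_output_____ ###Markdown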
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
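One detail worth noting: when the loop above breaks, `Theta` has already taken the update that made the validation loss go up, so strictly speaking it is one step past the best model (here the difference is negligible, since the validation accuracy is still perfect). A slightly more careful variant is sketched below; it keeps a copy of the best parameters seen so far. All the `*_es` names are new, `Theta` itself is left untouched, and the hyperparameters are reused from the cells above. ###Code
# Sketch: the same early stopping loop, but remembering the best parameters
np.random.seed(2042)
Theta_es = np.random.randn(n_inputs, n_outputs)
best_loss_es = np.inf                      # np.inf (the np.infty alias was removed in NumPy 2.0)
best_Theta_es = Theta_es.copy()
for iteration in range(n_iterations):
    error = softmax(X_train.dot(Theta_es)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_es[1:]]
    Theta_es = Theta_es - eta * gradients
    Y_proba_val = softmax(X_valid.dot(Theta_es))
    val_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_val + epsilon), axis=1))
                + alpha * 1/2 * np.sum(np.square(Theta_es[1:])))
    if val_loss < best_loss_es:
        best_loss_es = val_loss
        best_Theta_es = Theta_es.copy()    # remember the best parameters so far
    else:
        break                              # validation loss went up: keep best_Theta_es
np.mean(np.argmax(softmax(X_valid.dot(best_Theta_es)), axis=1) == y_valid)
###Output _____no_output_____ ###Markdown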
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 설정 파이썬 2와 3을 모두 지원합니다. 공통 모듈을 임포트하고 맷플롯립 그림이 노트북 안에 포함되도록 설정하고 생성한 그림을 저장하기 위한 함수를 준비합니다: ###Code # 파이썬 2와 파이썬 3 지원 from __future__ import division, print_function, unicode_literals # 공통 import numpy as np import os # 일관된 출력을 위해 유사난수 초기화 np.random.seed(42) # 맷플롯립 설정 %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # 한글출력 matplotlib.rc('font', family='NanumBarunGothic') plt.rcParams['axes.unicode_minus'] = False # 그림을 저장할 폴드 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-", linewidth=2, label="예측") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 scipy.linalg.lstsq() 함수("least squares"의 약자)를 사용하므로 직접 호출할 수 있습니다: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_(pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 경사 하강법을 사용한 선형 회귀 ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output _____no_output_____ ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 무작위 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 빠짐 y_predict = X_new_b.dot(theta) # 책에는 빠짐 style = "b-" if i > 0 else "r--" # 책에는 빠짐 plt.plot(X_new, y_predict, style) # 책에는 빠짐 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 빠짐 plt.plot(X, y, "b.") # 책에는 빠짐 plt.xlabel("$x_1$", fontsize=18) # 책에는 빠짐 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 빠짐 plt.axis([0, 2, 0, 15]) # 책에는 빠짐 save_fig("sgd_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 무작위 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in 
range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="SGD") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="미니배치") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="배치") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="예측") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="훈련") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="검증") plt.legend(loc="upper right", fontsize=14) 
# 책에는 빠짐 plt.xlabel("훈련 세트 크기", fontsize=14) # 책에는 빠짐 plt.ylabel("RMSE", fontsize=14) # 책에는 빠짐 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 빠짐 save_fig("underfitting_learning_curves_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 빠짐 save_fig("learning_curves_plot") # 책에는 빠짐 plt.show() # 책에는 빠짐 ###Output _____no_output_____ ###Markdown 규제가 있는 모델 ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = 
sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('최선의 모델', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="검증 세트") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="훈련 세트") plt.legend(loc="upper right", fontsize=14) plt.xlabel("에포크", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 이어서 학습합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # 편향은 무시 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0, labelpad=15) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, 
JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 넓이 y = (iris["target"] == 2).astype(np.int) # Iris-Virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown 향후 사이킷런 0.22 버전에서 `LogisticRegression` 클래스의 `solver` 매개변수 기본값이 `liblinear`에서 `lbfgs`로 변경될 예정입니다. 사이킷런 0.20 버전에서 `solver` 매개변수를 지정하지 않는 경우 이에 대한 경고 메세지를 출력합니다. 경고 메세지를 피하고 출력 결과를 일관되게 유지하기 위하여 `solver` 매개변수를 `liblinear`로 설정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='liblinear', random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "결정 경계", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("꽃잎의 폭 (cm)", fontsize=14) plt.ylabel("확률", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver='liblinear', C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Iris-Virginica 아님", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("꽃잎의 길이", fontsize=14) plt.ylabel("꽃잎의 폭", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] 
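# multi_class="multinomial" below selects softmax regression over all three classes at once; C is the inverse of the regularization strength (C=10 means a fairly weak penalty)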
softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("꽃잎의 길이", fontsize=14) plt.ylabel("꽃잎의 폭", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 가능한 한가지 방법은 다음과 같습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항과 인덱스가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그래디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.4946891059460321 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.48891736218308185 4500 0.4888643337449302 5000 0.4888403120738818 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 그래도 완벽하고 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("꽃잎 길이", fontsize=14) plt.ylabel("꽃잎 폭", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
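# Figures are saved under ./images/training_linear_models/ (the directory is created just below if needed)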
CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output Saving figure generated_data_plot ###Markdown use the inv() function from NumPy’s linear algebra module (np.linalg) to compute the inverse of a matrix, and the dot() method for matrix multiplication: ###Code X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
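The pseudoinverse itself comes from the Singular Value Decomposition: $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ is obtained by inverting every non-negligible singular value and zeroing out the rest. A minimal sketch of that computation (an illustration added here, not a cell from the original notebook): ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.)  # invert only the non-negligible singular values
X_b_pinv = Vt.T @ np.diag(s_inv) @ U.T  # Moore-Penrose pseudoinverse of X_b
X_b_pinv.dot(y)                         # should match theta_best
###Output _____no_output_____ ###Markdown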
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
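# Overlay the mini-batch and full-batch paths next: batch GD traces the smoothest path,
# SGD the noisiest, and mini-batch GD ends up somewhere in between.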
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
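# The persistent gap between the training and validation RMSE curves of this degree-10 model
# is the classic signature of overfitting.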
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. The penalty hyperparameter sets the type of regularization term to use. Specifying "l2" indicates that you want SGD to add a regularization term to the cost function equal to half the square of the ℓ2 norm of the weight vector: this is simply Ridge Regression. 
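In other words, the cost being minimized is $J(\mathbf{\theta}) = \text{MSE}(\mathbf{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i^2}$, where the bias term $\theta_0$ is left out of the penalty and $\alpha$ is `SGDRegressor`'s `alpha` hyperparameter (left at its default below).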
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = 
np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
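Also note that `np.int` has since been removed from NumPy (1.24 and later); on a recent installation, the `astype(np.int)` call above should read `astype(int)`. As a reminder of the model about to be fit: the estimated probability is $\hat{p} = \sigma\left(\mathbf{x}^T \mathbf{\theta}\right)$ with $\sigma(t) = \dfrac{1}{1 + e^{-t}}$, and the model predicts the positive class whenever $\hat{p} \geq 0.5$, i.e. whenever $\mathbf{x}^T \mathbf{\theta} \geq 0$.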
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) 
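# zz1 (contoured just above) is the estimated probability of class 1 (Iris versicolor);
# clabel below annotates those iso-probability contour lines on top of the colored decision regions.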
plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class, which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrices for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training.
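One practical aside (an illustration added here, not part of the original solution): `np.exp` overflows for large logits. A mathematically equivalent variant subtracts the per-row maximum before exponentiating, which the small Iris logits don't strictly need but larger problems usually do: ###Code
def softmax_stable(logits):
    # shifting each row by its maximum leaves the result unchanged but avoids overflow
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown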
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print it out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
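Concretely, the regularized cost becomes $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{k=1}^{K}\sum\limits_{i=1}^{n}{\left(\theta_{i}^{(k)}\right)^2}$ (the bias row of $\mathbf{\Theta}$ is excluded from the penalty), and each gradient vector $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta})$ gets an extra $\alpha \, \mathbf{\theta}^{(k)}$ term, again with a zero in the bias position.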
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
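One caveat about this implementation (an observation, not a change to the solution above): it halts the very first time the validation loss ticks up, which can be premature when the loss is noisy. A common variant keeps training for a few more iterations ("patience") and rolls back to the best parameters seen so far; here is a sketch under that assumption, reusing the variables defined above: ###Code
alpha, eta, patience, wait = 0.1, 0.1, 50, 0   # a patience of 50 iterations is an arbitrary choice
best_loss, best_Theta = np.infty, None
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
    Y_proba = softmax(X_train.dot(Theta))
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients
    Y_proba_valid = softmax(X_valid.dot(Theta))
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta[1:]))
    if loss < best_loss:
        best_loss, best_Theta, wait = loss, Theta.copy(), 0
    else:
        wait += 1
        if wait >= patience:
            Theta = best_Theta   # roll back to the best parameters seen
            break
###Output _____no_output_____ ###Markdown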
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrices for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390373 ###Markdown And that's it! The Softmax model is trained.
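As a quick spot check (a sketch that simply reuses `Theta` and the `softmax()` helper defined above), we can score a single flower by hand, say petal length 5 cm and petal width 2 cm, remembering to prepend the bias term: ###Code
# Score one flower by hand (assumes Theta and softmax() from the cells above).
x_example = np.array([[1.0, 5.0, 2.0]])        # [bias x0, petal length, petal width]
proba_example = softmax(x_example.dot(Theta))
proba_example.round(3), proba_example.argmax(axis=1)
###Output _____no_output_____ ###Markdown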
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629506 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
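One caveat: the loop above stops at the very first epoch where the validation loss rises, which can be sensitive to noise. A common variant (just a sketch, not part of the exercise; it reuses `X_train`, `Y_train_one_hot`, `X_valid`, `Y_valid_one_hot`, `y_valid`, `softmax()`, `n_inputs` and `n_outputs` from above, and keeps its own `Theta_patience` so the model trained above is left untouched) remembers the best parameters seen so far and stops only after a chosen number of epochs without improvement: ###Code
# Early stopping with patience (sketch): keep the best parameters seen so far and
# stop only after `patience` consecutive epochs without validation improvement.
eta = 0.1
alpha = 0.1            # regularization hyperparameter, as above
epsilon = 1e-7
patience = 20          # hypothetical tolerance, tune as needed
m = len(X_train)

best_loss, best_Theta, epochs_without_progress = np.inf, None, 0
Theta_patience = np.random.randn(n_inputs, n_outputs)

for iteration in range(5001):
    # one training step (same update as above)
    error = softmax(X_train.dot(Theta_patience)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_patience[1:]]
    Theta_patience = Theta_patience - eta * gradients

    # regularized cross-entropy loss on the validation set
    Y_proba_valid = softmax(X_valid.dot(Theta_patience))
    xentropy = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    loss = xentropy + alpha * 1/2 * np.sum(np.square(Theta_patience[1:]))

    if loss < best_loss:
        best_loss, best_Theta, epochs_without_progress = loss, Theta_patience.copy(), 0
    else:
        epochs_without_progress += 1
        if epochs_without_progress >= patience:
            break

val_predict = np.argmax(softmax(X_valid.dot(best_Theta)), axis=1)
iteration, np.mean(val_predict == y_valid)   # stopping epoch and validation accuracy
###Output _____no_output_____ ###Markdown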
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # 불필요한 경고를 무시합니다 (사이파이 이슈 #5998 참조) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code 
theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], 
y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
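As a small aside before the `SGDRegressor` cell that follows (a sketch that is not in the original notebook; it reuses the small `X` and `y` generated above): Ridge Regression also has a closed-form solution, $\hat{\mathbf{\theta}} = \left(\mathbf{X}^T\mathbf{X} + \alpha\mathbf{A}\right)^{-1}\mathbf{X}^T\mathbf{y}$, where $\mathbf{A}$ is the identity matrix except for a 0 in the top-left cell, so that the bias term is not regularized: ###Code
import numpy as np

# Closed-form Ridge (sketch): theta = (X^T X + alpha * A)^(-1) X^T y,
# where A is the identity matrix with a 0 in the top-left cell (bias not regularized).
alpha = 1
X_b_ridge = np.c_[np.ones((len(X), 1)), X]     # add x0 = 1 to each instance
A = np.identity(X_b_ridge.shape[1])
A[0, 0] = 0                                    # do not regularize the bias term
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + alpha * A).dot(X_b_ridge.T).dot(y)
theta_ridge                                    # should be close to Ridge(alpha=1) above
###Output _____no_output_____ ###Markdown Back to the notebook's code: the `SGDRegressor` below applies the same $\ell_2$ penalty with Stochastic Gradient Descent, using the `max_iter` and `tol` defaults mentioned in the note above.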
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, 
axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. 
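A small numerical aside (a sketch, not in the original notebook): the log loss that Logistic Regression minimizes for a single instance, $-\left[y\log\hat{p} + (1-y)\log(1-\hat{p})\right]$, is small when the predicted probability agrees with the label and grows very large when the model is confidently wrong: ###Code
import numpy as np

# Log loss for one training instance: -(y*log(p_hat) + (1-y)*log(1-p_hat)).
def single_log_loss(y_true, p_hat):
    return -(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

for p_hat in (0.9, 0.5, 0.1):
    print("y=1, p_hat={}: loss={:.3f}".format(p_hat, single_log_loss(1, p_hat)))
###Output _____no_output_____ ###Markdown The `LogisticRegression` model fitted next minimizes this loss averaged over the training set, plus an $\ell_2$ regularization term controlled by `C`: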
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) 
plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. 
###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
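One more practical aside before the early-stopping run below (a sketch, not part of the exercise): the `softmax()` defined earlier exponentiates the raw scores directly, which can overflow for large logits. Subtracting the per-row maximum first leaves the result unchanged but keeps `np.exp()` in a safe range: ###Code
import numpy as np

def softmax_stable(logits):
    # Shifting by the row-wise max does not change the softmax output
    # (the common factor exp(-max) cancels out), but prevents overflow in np.exp().
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

softmax_stable(np.array([[1000.0, 1001.0, 1002.0]]))   # no overflow, rows sum to 1
###Output _____no_output_____ ###Markdown Now for the early-stopping run itself: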
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
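As a quick sanity check on the gradient formula used in the loop above (a sketch added here, not part of the original notebook; it reuses `softmax`, `X_train`, `Y_train_one_hot`, `epsilon`, `m` and the trained `Theta` from the cells above), we can compare the analytical gradient with a centred finite-difference estimate of the cost: ###Code
# Sanity-check sketch: analytical gradient vs. centred finite differences of the cost.
def cost_at(Theta_candidate):
    probas = softmax(X_train.dot(Theta_candidate))
    return -np.mean(np.sum(Y_train_one_hot * np.log(probas + epsilon), axis=1))

analytical_grad = 1/m * X_train.T.dot(softmax(X_train.dot(Theta)) - Y_train_one_hot)
numerical_grad = np.zeros_like(Theta)
h = 1e-5
for i in range(Theta.shape[0]):
    for k in range(Theta.shape[1]):
        Theta_plus, Theta_minus = Theta.copy(), Theta.copy()
        Theta_plus[i, k] += h
        Theta_minus[i, k] -= h
        numerical_grad[i, k] = (cost_at(Theta_plus) - cost_at(Theta_minus)) / (2 * h)

np.max(np.abs(analytical_grad - numerical_grad))  # expected to be tiny (around 1e-8)
###Output _____no_output_____ ###Markdown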
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
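One small caveat about the loop above: when it breaks, `Theta` has already taken the gradient step that made the validation loss go up, so it sits one step past the best parameters. Below is a minimal variant sketch (added here, not part of the original notebook) that also keeps a copy of the best parameters; it reuses the variables defined above but under new names, so the `Theta` trained above is left untouched: ###Code
# Variant sketch: early stopping that rolls back to the best parameters seen.
eta_s, alpha_s, epsilon_s = 0.1, 0.1, 1e-7
Theta_s = np.random.randn(n_inputs, n_outputs)
best_loss_s, best_Theta_s = np.infty, Theta_s.copy()
for iteration in range(5001):
    error = softmax(X_train.dot(Theta_s)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha_s * Theta_s[1:]]
    Theta_s = Theta_s - eta_s * gradients
    Y_proba_valid = softmax(X_valid.dot(Theta_s))
    val_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon_s), axis=1))
                + alpha_s * 1/2 * np.sum(np.square(Theta_s[1:])))
    if val_loss < best_loss_s:
        best_loss_s, best_Theta_s = val_loss, Theta_s.copy()
    else:
        Theta_s = best_Theta_s  # roll back to the best parameters, then stop
        break

np.mean(np.argmax(softmax(X_valid.dot(Theta_s)), axis=1) == y_valid)  # validation accuracy
###Output _____no_output_____ ###Markdown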
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
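As an aside (a sketch added here, not part of the original notebook), the pseudoinverse can be built by hand from the SVD of `X_b` and checked against the `lstsq` solution above: ###Code
# Sketch: Moore-Penrose pseudoinverse from the SVD, X^+ = V diag(1/s) U^T.
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.0)       # invert only non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)  # same as np.linalg.pinv(X_b)
X_b_pinv.dot(y)                               # should match theta_best_svd
###Output _____no_output_____ ###Markdown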
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. 
This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def 
learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", 
linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, 
learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ 
penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) # i.e., features #print (type(iris)) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") # ground truth plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) # plot a vertical line plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) # large regularization term, basically use linear boundary log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] # concat. 
on 2nd dim to create coordinate lists of points to evaluate y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
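(For reference only — this is the shortcut the exercise deliberately avoids — here is a sketch, added for this edition, of how the same 60/20/20 proportions could be obtained with two calls to `train_test_split()`; the variable names are illustrative and are not used later.) ###Code
# Reference sketch only, not part of the exercise: a 60/20/20 split via train_test_split.
from sklearn.model_selection import train_test_split
X_rest, X_test_alt, y_rest, y_test_alt = train_test_split(
    X_with_bias, y, test_size=0.2, random_state=2042)
X_train_alt, X_valid_alt, y_train_alt, y_valid_alt = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=2042)  # 0.25 of the remaining 80% = 20%
###Output _____no_output_____ ###Markdown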
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. 
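For instance, a quick shape check along those lines (a sketch added here, not in the original notebook; it uses a throwaway all-zeros parameter matrix so the random seed set earlier is not consumed) could look like this: ###Code
# Illustrative shape check: print the shape of each term once before training.
Theta_check = np.zeros((n_inputs, n_outputs))
logits_check = X_train.dot(Theta_check)                 # (m, 3)
proba_check = softmax(logits_check)                     # (m, 3)
error_check = proba_check - Y_train_one_hot             # (m, 3)
grad_check = X_train.T.dot(error_check) / len(X_train)  # (3, 3), same shape as Theta_check
print(logits_check.shape, proba_check.shape, error_check.shape, grad_check.shape)
###Output _____no_output_____ ###Markdown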
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446183864821945 500 0.8351003035768683 1000 0.6876961554414912 1500 0.6010299835452122 2000 0.5442782811959167 2500 0.5037262742244605 3000 0.4728357293908468 3500 0.4481872508179334 4000 0.4278347262806174 4500 0.4105891022823527 5000 0.39568032574889406 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629574947908294 500 0.5341631554372782 1000 0.5037712748637474 1500 0.4948056455575166 2000 0.49140819484111964 2500 0.4900085074445459 3000 0.48940742896132616 3500 0.4891431024691195 4000 0.48902516549065855 4500 0.48897205809605315 5000 0.4889480004791563 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make 
this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. 
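To make this concrete, here is a small optional sketch (not part of the chapter's code; it reuses the `X_b` and `y` defined above, and `X_degenerate` is just an illustrative name): with an exactly duplicated feature column, $\mathbf{X}^T \mathbf{X}$ becomes singular, so the Normal Equation breaks down while the pseudoinverse still returns a minimum-norm least-squares solution.
###Code
# Optional sketch: duplicate a feature column so that X^T X is singular.
# The pseudoinverse still works; inverting X^T X generally does not.
X_degenerate = np.c_[X_b, X_b[:, 1:]]              # exact copy of the feature column
theta_pinv = np.linalg.pinv(X_degenerate).dot(y)   # minimum-norm least-squares solution
try:
    np.linalg.inv(X_degenerate.T.dot(X_degenerate)).dot(X_degenerate.T).dot(y)
except np.linalg.LinAlgError as err:
    print("Normal Equation failed:", err)          # typically "Singular matrix"
theta_pinv
###Output
_____no_output_____
###Markdown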
However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2 / m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) 
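# likewise convert the SGD and mini-batch parameter paths to arrays for plotting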
theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = 
Pipeline([ ('poly_features', PolynomialFeatures(degree=10, include_bias=False)), ('lin_reg', LinearRegression()) ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver='cholesky', random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', 
xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure 
lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver='liblinear', random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = 
np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
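For example (a small illustration, not from the exercise text), with 3 classes the class index 2 corresponds to the target vector `[0., 0., 1.]`, and NumPy fancy indexing into an identity matrix produces such vectors in one line:
###Code
# Illustration only: one-hot vectors via fancy indexing into an identity matrix.
# Row y is selected for each instance, i.e. a 1.0 at the class index and 0.0 elsewhere.
np.eye(3)[np.array([0, 2, 1])]
# -> array([[1., 0., 0.],
#           [0., 0., 1.],
#           [0., 1., 0.]])
###Output
_____no_output_____
###Markdown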
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405649 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
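Before looking at the parameters, here is an optional sanity check (not part of the exercise; the helper name `full_loss` and the index pair `(i, k)` are chosen only for illustration): the analytical gradient can be compared against a central finite-difference estimate for a single entry of `Theta`.
###Code
# Optional sanity check: compare the analytical gradient with a numerical
# central-difference estimate for one entry of Theta. They should agree closely;
# small differences come from the epsilon term in the loss and from finite-difference error.
def full_loss(Theta_test):
    Y_proba_test = softmax(X_train.dot(Theta_test))
    return -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba_test + epsilon), axis=1))

i, k = 1, 2                      # arbitrary parameter entry to check
h = 1e-5
Theta_plus, Theta_minus = Theta.copy(), Theta.copy()
Theta_plus[i, k] += h
Theta_minus[i, k] -= h
numeric_grad = (full_loss(Theta_plus) - full_loss(Theta_minus)) / (2 * h)

error = softmax(X_train.dot(Theta)) - Y_train_one_hot
analytic_grad = (1/m * X_train.T.dot(error))[i, k]
numeric_grad, analytic_grad
###Output
_____no_output_____
###Markdown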
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629507 1000 0.503640075014894 1500 0.49468910594603205 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451199 3500 0.489035124439786 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
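One possible refinement (an optional sketch, not the book's solution; `Theta_candidate` and `best_Theta` are new names used only here): the loop above stops one step *after* the validation loss starts rising, so its final parameters are slightly past the best point. Keeping a snapshot of the best parameters lets us roll back to them.
###Code
# Optional sketch: same early-stopping loop, but also snapshot the parameters that
# achieved the lowest validation loss, so we can fall back to them when the loss rises.
best_loss = np.infty
best_Theta = None
Theta_candidate = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    logits = X_train.dot(Theta_candidate)
    Y_proba = softmax(logits)
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_candidate[1:]]
    Theta_candidate = Theta_candidate - eta * gradients

    logits = X_valid.dot(Theta_candidate)
    Y_proba = softmax(logits)
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta_candidate[1:]))
    if loss < best_loss:
        best_loss, best_Theta = loss, Theta_candidate.copy()
    else:
        break      # validation loss started rising: stop; best_Theta holds the best parameters
###Output
_____no_output_____
###Markdown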
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
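As a brief aside (a sketch for illustration only; `X_b_pinv` is an arbitrary name, and it reuses `X_b`, `y` and `theta_best` from the cells above), the pseudoinverse can be written out explicitly from the SVD as $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, inverting only the non-negligible singular values:
###Code
# Aside: recompute the pseudoinverse from an explicit SVD of X_b.
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]            # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)   # X^+ = V Sigma^+ U^T
X_b_pinv.dot(y)                                # should closely match theta_best above
###Output
_____no_output_____
###Markdown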
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
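As a side note (an optional sketch; `sgd_reg_l2` is just an illustrative name), `SGDRegressor` also exposes its own `alpha` hyperparameter for the $\ell_2$ strength (its default is `1e-4`), so the regularization can be made explicit on the same data:
###Code
# Optional sketch: make the l2 regularization strength explicit via SGDRegressor's
# alpha hyperparameter; the other settings mirror the next cell.
sgd_reg_l2 = SGDRegressor(penalty="l2", alpha=0.1, max_iter=1000, tol=1e-3,
                          random_state=42)
sgd_reg_l2.fit(X, y.ravel())
sgd_reg_l2.predict([[1.5]])
###Output
_____no_output_____
###Markdown
Back to the $\ell_2$-penalized SGD regressor with the future-proof settings: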
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
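One more aside before training (an optional sketch; `log_reg_weak` is an illustrative name): remember that `LogisticRegression` applies $\ell_2$ regularization by default, and its `C` hyperparameter is the *inverse* of the regularization strength, so a larger `C` means less regularization:
###Code
# Optional sketch: C is the inverse regularization strength (larger C = weaker
# regularization); the solver and data match the next cell.
from sklearn.linear_model import LogisticRegression
log_reg_weak = LogisticRegression(solver="lbfgs", C=100, random_state=42)
log_reg_weak.fit(X, y)
log_reg_weak.predict_proba([[1.7]])
###Output
_____no_output_____
###Markdown
Now let's train the model with the default regularization: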
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. 
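One practical aside (an optional sketch; the helper name `softmax_stable` is only for illustration): with large logits the naive exponentials can overflow. Subtracting the row-wise maximum before exponentiating avoids this and leaves the resulting probabilities unchanged. It is not needed for this small exercise, but a numerically safer variant would look like this:
###Code
# Optional sketch: a numerically safer softmax variant. Shifting each row by its
# maximum logit prevents overflow in np.exp() without changing the probabilities.
def softmax_stable(logits):
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output
_____no_output_____
###Markdown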
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\hat{p}_k^{(i)}$ inside the log to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.4278651093928792 4500 0.41060071429187134 5000 0.3956780375390373 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
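To spell out exactly what the next cell minimizes (this formula is an added note, derived from the cost function above and the penalty used in the code below), the regularized cost is the cross entropy plus the $\ell_2$ penalty on every parameter except the bias row: $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{k=1}^{K}\sum\limits_{j \geq 1}{\left(\theta_{j,k}\right)^2}$ and its gradient with respect to $\mathbf{\theta}^{(k)}$ simply gains an extra $\alpha \, \theta_{j,k}$ term for every non-bias weight, which is what the `np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]` term in the gradient computation implements.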
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629507 1000 0.503640075014894 1500 0.4946891059460322 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
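One small caveat (this remark and the variant below are additions, not part of the original solution): the loop above stops as soon as the validation loss increases, but it keeps the last `Theta`, i.e. the parameters obtained just after the validation loss went up. If you want to be strict about early stopping, you can keep a copy of the best parameters seen so far and roll back to them when stopping, for example: ###Code
# Added variant (illustration only): early stopping that rolls back to the best parameters
best_loss = np.infty
best_Theta = None
Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # gradient step on the regularized training loss
    logits = X_train.dot(Theta)
    Y_proba = softmax(logits)
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # evaluate the regularized loss on the validation set
    logits = X_valid.dot(Theta)
    Y_proba = softmax(logits)
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta[1:]))

    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta.copy()  # remember the best parameters seen so far
    else:
        Theta = best_Theta         # roll back to the best parameters, then stop
        print(iteration, loss, "early stopping!")
        break
###Output _____no_output_____ ###Markdown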
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____
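###Markdown As a last cross-check (this cell is an added illustration, not part of the original exercise), we can train Scikit-Learn's own softmax regression on the same training split and compare its test accuracy with ours; the two should be in the same ballpark. Note that `X_train` and `X_test` contain the bias column we added manually, and Scikit-Learn fits its own intercept, so we drop that column before fitting: ###Code
from sklearn.linear_model import LogisticRegression

# illustration only: same split, Scikit-Learn's multinomial logistic regression
sk_softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)
sk_softmax_reg.fit(X_train[:, 1:], y_train)
np.mean(sk_softmax_reg.predict(X_test[:, 1:]) == y_test)
###Output _____no_output_____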
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
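Before moving on, here is an optional sanity check (an extra cell, not part of the book's solution): it compares the analytical gradient $\dfrac{1}{m}\mathbf{X}^T(\hat{\mathbf{p}} - \mathbf{y})$ used in the loop above with a central-difference estimate on a tiny batch. The helper `xent_loss` and the names `Theta_check`, `X_small`, `Y_small` are introduced here for illustration only; `softmax()`, `X_train`, `Y_train_one_hot`, `n_inputs` and `n_outputs` come from the cells above. The $\epsilon$ term is dropped in the check so the analytical formula is exact. ###Code
# Sanity check: verify the gradient formula against central differences.
# A local RandomState is used so the notebook's global random sequence is not disturbed.
def xent_loss(Theta_candidate, X_batch, Y_batch):
    P = softmax(X_batch.dot(Theta_candidate))
    return -np.mean(np.sum(Y_batch * np.log(P), axis=1))

rng = np.random.RandomState(42)
Theta_check = rng.randn(n_inputs, n_outputs)
X_small = X_train[:5]
Y_small = Y_train_one_hot[:5]

# analytical gradient: 1/m * X^T (p_hat - y), the same formula as in the training loop
P_small = softmax(X_small.dot(Theta_check))
grad_analytic = 1 / len(X_small) * X_small.T.dot(P_small - Y_small)

# numerical gradient by central differences
h = 1e-6
grad_numeric = np.zeros_like(Theta_check)
for row in range(n_inputs):
    for col in range(n_outputs):
        Theta_plus, Theta_minus = Theta_check.copy(), Theta_check.copy()
        Theta_plus[row, col] += h
        Theta_minus[row, col] -= h
        grad_numeric[row, col] = (xent_loss(Theta_plus, X_small, Y_small)
                                  - xent_loss(Theta_minus, X_small, Y_small)) / (2 * h)

np.max(np.abs(grad_analytic - grad_numeric))  # should be tiny (roughly 1e-8 or below)
###Output
_____no_output_____
###Markdown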
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
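As a quick cross-check (not part of the exercise, which asks for a manual implementation), Scikit-Learn's own softmax regression trained on the same split should land in the same ballpark on the validation set. The name `sk_softmax` is introduced here for illustration; `X_train`, `y_train`, `X_valid` and `y_valid` come from the split above, and the first column of ones is dropped because Scikit-Learn fits its own intercept. ###Code
from sklearn.linear_model import LogisticRegression

sk_softmax = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                C=10, random_state=42)
sk_softmax.fit(X_train[:, 1:], y_train)   # drop the manually added bias column
sk_softmax.score(X_valid[:, 1:], y_valid)  # validation accuracy, should be comparable
###Output
_____no_output_____
###Markdown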
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
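# Note: save_fig() below writes images to PROJECT_ROOT_DIR/images/CHAPTER_ID/
# (here ./images/training_linear_models/); that directory must already exist,
# otherwise plt.savefig() will raise an error.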
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
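To make that point concrete, here is a small illustration (an extra cell, not from the book): duplicating a feature makes $\mathbf{X}^T \mathbf{X}$ singular, so inverting it should fail, while the pseudoinverse still returns the minimum-norm least-squares solution. The name `X_dup` is introduced here for illustration; `X_b` and `y` come from the cells above. ###Code
X_dup = np.c_[X_b, X_b[:, 1]]  # third column is an exact copy of the feature column

try:
    np.linalg.inv(X_dup.T.dot(X_dup))  # the Normal Equation relies on this inverse
except np.linalg.LinAlgError as err:
    print("Normal Equation fails:", err)

np.linalg.pinv(X_dup).dot(y)  # the pseudoinverse still returns a solution
###Output
_____no_output_____
###Markdown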
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
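One small refinement worth noting (a sketch, not part of the book's solution): the loop above only breaks after the validation loss has already started to rise, so the final `Theta` is one step past the best one. The sketch below keeps a copy of the best parameters instead. The names `Theta_es`, `best_Theta_es`, `best_loss_es` and `y_valid_pred_es` are introduced here for illustration, a local `RandomState` is used so the notebook's own `Theta` and random sequence are left untouched, and the hyperparameters (`eta`, `alpha`, `epsilon`, `n_iterations`) are reused from the cell above. ###Code
rng = np.random.RandomState(2042)
Theta_es = rng.randn(n_inputs, n_outputs)
best_loss_es = np.infty
best_Theta_es = Theta_es.copy()

for iteration in range(n_iterations):
    # same regularized training step as above
    error = softmax(X_train.dot(Theta_es)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_es[1:]]
    Theta_es = Theta_es - eta * gradients

    # regularized validation loss, as above
    Y_proba_valid = softmax(X_valid.dot(Theta_es))
    val_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
                + alpha / 2 * np.sum(np.square(Theta_es[1:])))

    if val_loss < best_loss_es:
        best_loss_es = val_loss
        best_Theta_es = Theta_es.copy()  # remember the best parameters seen so far
    else:
        break  # early stopping

Theta_es = best_Theta_es  # roll back to the best iteration
y_valid_pred_es = np.argmax(softmax(X_valid.dot(Theta_es)), axis=1)
np.mean(y_valid_pred_es == y_valid)
###Output
_____no_output_____
###Markdown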
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
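As a quick consistency check (an extra cell, not from the book), the pseudoinverse solution should match the coefficients found by `LinearRegression` up to floating-point noise. The names `theta_pinv` and `lin_check` are introduced here for illustration; `X_b`, `X` and `y` come from the cells above. ###Code
theta_pinv = np.linalg.pinv(X_b).dot(y)

lin_check = LinearRegression()
lin_check.fit(X, y)

# both should give (almost) the same [intercept, slope]
np.allclose(theta_pinv.ravel(), np.r_[lin_check.intercept_, lin_check.coef_.ravel()])
###Output
_____no_output_____
###Markdown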
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
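# The validation RMSE (solid blue, plotted just above) reaches its minimum at best_epoch and then
# starts creeping up again, which is exactly the signal that early stopping relies on; the training
# RMSE (dashed red, plotted next) keeps decreasing.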
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
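Concretely, the regularized cost that the next cell minimizes is the cross entropy plus the $\ell_2$ penalty, with $\alpha$ the regularization hyperparameter and the penalty summed over every row of $\mathbf{\Theta}$ except the bias row:$J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{j \geq 1}\sum\limits_{k=1}^{K}{\left(\theta_{j,k}\right)^2}$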
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
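As an optional sanity check (an addition here, not part of the original exercise), we can compare this hand-rolled model with Scikit-Learn's own softmax classifier, i.e. `LogisticRegression` with `multi_class="multinomial"`. The comparison is only rough, since Scikit-Learn parameterizes the $\ell_2$ penalty through `C` (roughly the inverse of our `alpha`), so the two models are expected to be close but not identical: ###Code
# Hedged sketch: cross-check against Scikit-Learn's multinomial logistic regression.
# Assumption: X_train, y_train, X_valid and y_valid defined above are still in scope, and the
# first column of X_train/X_valid is the manually added bias term, which we drop here because
# Scikit-Learn fits its own intercept.
from sklearn.linear_model import LogisticRegression

softmax_check = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                   C=10, random_state=42)
softmax_check.fit(X_train[:, 1:], y_train)
softmax_check.score(X_valid[:, 1:], y_valid)  # mean accuracy on the validation set
###Output
_____no_output_____
###Markdown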
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
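# save_fig() defined below stores each figure as ./images/training_linear_models/<fig_id>.png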
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: 
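# during the very first epoch only, plot the first 20 intermediate prediction lines
# to show how quickly SGD moves at the start of training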
# not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression 
= Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 
1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = 
theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", 
fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
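(A small convenience sketch, added here and not part of the original exercise: if the notebook was restarted and the `iris` variable is no longer defined, it can simply be re-loaded first.) ###Code
# Only needed if the earlier datasets.load_iris() cell was not run in this session.
from sklearn import datasets
iris = datasets.load_iris()
###Output
_____no_output_____
###Markdown With `iris` in memory, we grab the petal features and the class targets: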
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. 
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better?
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. 
###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # 불필요한 경고를 무시합니다 (사이파이 이슈 #5998 참조) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown 정규 방정식: $\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). 
`np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 $\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd 
= np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) 
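# Added sketch (not part of the original notebook): scikit-learn's built-in
# learning_curve() computes equivalent curves with cross-validation instead of
# the single train/validation split used by the plot_learning_curves() helper above.
from sklearn.model_selection import learning_curve
lc_sizes, lc_train_scores, lc_val_scores = learning_curve(
    polynomial_regression, X, y, cv=5, scoring="neg_mean_squared_error",
    train_sizes=np.linspace(0.1, 1.0, 10))
lc_train_rmse = np.sqrt(-lc_train_scores.mean(axis=1))  # mean RMSE per training-set size
lc_val_rmse = np.sqrt(-lc_val_scores.mean(axis=1))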
plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) 
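# Added note (not part of the original cell): sklearn.base.clone() returns an
# *unfitted* copy, so `best_model` above only records the hyperparameters of the
# best epoch, not its learned coefficients. To keep the fitted state as well,
# copy the estimator instead, as the English version of this example further
# down in the notebook does:
#
#     from copy import deepcopy
#     best_model = deepcopy(sgd_reg)   # inside the `if val_error < minimum_val_error:` branch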
###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") 
ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", 
fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 
다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 
확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
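The pseudoinverse itself comes from the Singular Value Decomposition $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$, by inverting the non-zero singular values: $\mathbf{X}^{+} = \mathbf{V} \mathbf{\Sigma}^{+} \mathbf{U}^T$. As a quick added sketch (not part of the original notebook), here it is built by hand: ###Code
# Added sketch: build the Moore-Penrose pseudoinverse from the SVD and check that
# it reproduces the Normal Equation solution computed above (theta_best).
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
X_b_pinv = Vt.T.dot(np.diag(1 / s)).dot(U.T)  # assumes all singular values are non-zero
X_b_pinv.dot(y)
###Output _____no_output_____ ###Markdown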
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") 
ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris 
virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
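One practical aside (added here, not in the original notebook): the `softmax()` defined above exponentiates the raw logits, which can overflow for very large scores. A common, mathematically equivalent variant subtracts the per-row maximum before exponentiating: ###Code
# Added sketch: numerically safer softmax (identical output, harder to overflow).
def stable_softmax(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)  # largest logit becomes 0
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown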
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
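As a small usage example (added, not part of the original notebook), the trained `Theta` can classify a single new flower once a bias term is prepended, mirroring the earlier `softmax_reg.predict([[5, 2]])` call: ###Code
# Added sketch: classify one flower (petal length 5 cm, petal width 2 cm) with the manual model.
x_example = np.array([[1., 5., 2.]])           # [bias, petal length, petal width]
proba_example = softmax(x_example.dot(Theta))
proba_example, np.argmax(proba_example, axis=1)
###Output _____no_output_____ ###Markdown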
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
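To make that a bit more concrete, here is a small illustration (not in the original notebook) of how the pseudoinverse comes out of the Singular Value Decomposition, $\mathbf{X}^+ = \mathbf{V}\mathbf{\Sigma}^+\mathbf{U}^T$, where $\mathbf{\Sigma}^+$ inverts every singular value above a small tolerance and zeroes out the rest (the tolerance used here is an arbitrary choice): ###Code
# Build the pseudoinverse of X_b from its SVD; X_b_pinv.dot(y)
# should closely match theta_best_svd computed above.
U, sv, Vt = np.linalg.svd(X_b, full_matrices=False)
tol = 1e-6 * sv.max()                      # treat tiny singular values as zero
sv_inv = np.where(sv > tol, 1 / sv, 0.0)   # invert only the significant ones
X_b_pinv = Vt.T.dot(np.diag(sv_inv)).dot(U.T)
X_b_pinv.dot(y)
###Output _____no_output_____ ###Markdown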
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) 
plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = 
np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) 
plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class, which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrices for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training.
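A side note on this implementation, not needed for the exercise (the logits stay small here): exponentiating large logits can overflow. A common, numerically safer variant subtracts the per-row maximum first, which leaves the result unchanged since adding a constant to every score does not change the softmax: ###Code
def softmax_stable(logits):
    # subtracting the row-wise max does not change the output,
    # but keeps np.exp() from overflowing for large logits
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown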
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
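As an optional sanity check (not part of the original solution), you can compare against Scikit-Learn's own softmax regression on the same manual split. Note that `C=1/alpha` is only a rough counterpart of the penalty used above, since Scikit-Learn scales the regularization term differently, so the accuracy may differ slightly: ###Code
from sklearn.linear_model import LogisticRegression

softmax_clf = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                 C=1/alpha, random_state=42)
softmax_clf.fit(X_train[:, 1:], y_train)      # drop the manually added bias column
softmax_clf.score(X_valid[:, 1:], y_valid)
###Output _____no_output_____ ###Markdown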
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, 
y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from 
sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt 
import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - 
y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ 
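To make the cross-entropy cost above concrete, here is a tiny worked example (not in the original notebook, and the numbers are chosen arbitrarily). For a single instance with scores $\mathbf{s} = (2, 1, 0)$, the softmax gives $\hat{\mathbf{p}} \approx (0.665, 0.245, 0.090)$. If the true class is the second one, i.e. $\mathbf{y} = (0, 1, 0)$, that instance contributes $-\log(0.245) \approx 1.41$ to the cost, and this contribution shrinks toward 0 as $\hat{p}_2$ approaches 1.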
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.4106007142918715 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629506 1000 0.503640075014894 1500 0.4946891059460321 2000 0.49129684180754774 2500 0.489899247009333 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
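A common variant of this stopping rule (not used in this notebook) is to tolerate a small number of non-improving iterations, often called a patience budget, before giving up, and to remember the best iteration seen so far. The sketch below is illustrative only and runs on a synthetic validation-loss curve; the `patience` value and the fake curve are assumptions introduced here, not part of the exercise:
###Code
# Illustrative sketch of early stopping with patience, on a synthetic loss curve (not the exercise's data).
import numpy as np

val_losses = np.concatenate([np.linspace(1.0, 0.3, 60), np.linspace(0.31, 0.5, 40)])  # falls, then rises
patience = 5
best_loss, best_iteration, wait = np.inf, 0, 0

for iteration, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss, best_iteration, wait = loss, iteration, 0
    else:
        wait += 1
        if wait >= patience:
            print(iteration, "stop; best iteration was", best_iteration, "with loss", best_loss)
            break
###Output
_____no_output_____
###Markdown
The version used below simply stops at the first increase, which is enough for this smooth validation loss.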
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20:
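# plot the prediction line for the first 20 SGD steps of the first epoch, to show how much the fit jumps around
# each step below samples a single random instance and shrinks eta through learning_schedule(epoch * m + i)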
# not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() 
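# chain the polynomial expansion, standardization, and linear regression into one pipeline so each degree (300, 2, 1) is handled identically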
polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, 
polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * 
np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', 
ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. 
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
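The pseudoinverse itself is computed using Singular Value Decomposition (SVD): $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^\top$, where $\mathbf{\Sigma}^+$ inverts the non-zero singular values. As a quick check, here is a minimal sketch (not part of the original notebook) that rebuilds the pseudoinverse from `np.linalg.svd` and recovers the same parameters:
###Code
# Minimal sketch: rebuild the Moore-Penrose pseudoinverse of X_b from its SVD (illustrative only).
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
X_b_pinv = Vt.T.dot(np.diag(1 / s)).dot(U.T)   # Sigma^+ just inverts the singular values (all non-zero here)
X_b_pinv.dot(y)                                # same theta as theta_best_svd above
###Output
_____no_output_____
###Markdown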
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
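# overlay the mini-batch and batch paths next, so the three variants' routes through (theta_0, theta_1) space can be compared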
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
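# the gap that stays between the training and validation RMSE curves for this degree-10 model is the classic sign of overfitting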
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercises 1 to 11: not about code; exercise 12 is solved below. Exercise solutions 1. to 11. See appendix A: https://learning.oreilly.com/library/view/hands-on-machine-learning/9781492032632/app01.html#idm45022116036904 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
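A quick aside from me (not part of the original solution): for integer class indices, indexing an identity matrix produces the same one-hot rows in a single expression, for example: ###Code
y_demo = np.array([0, 2, 1])
np.eye(y_demo.max() + 1)[y_demo]  # each row is the one-hot vector for class 0, 2 and 1
###Output _____no_output_____ ###Markdown Writing the helper out explicitly, as below, keeps every step of the conversion visible.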
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
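As a quick sanity check (my addition, not in the book's solution), every row of the probability matrix returned by our softmax should sum to 1; we can confirm this on the training set: ###Code
np.allclose(softmax(X_train.dot(Theta)).sum(axis=1), 1.0)  # expected to evaluate to True
###Output _____no_output_____ ###Markdown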
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
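If you want an external reference point (this comparison is my addition, and the exact score it prints is not guaranteed to match ours), you can fit Scikit-Learn's own softmax classifier on the same training split, dropping the bias column since the estimator adds its own intercept term: ###Code
from sklearn.linear_model import LogisticRegression

softmax_clf = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=42)
softmax_clf.fit(X_train[:, 1:], y_train)    # X_train[:, 0] is the manually added bias column
softmax_clf.score(X_valid[:, 1:], y_valid)  # accuracy on the validation set
###Output _____no_output_____ ###Markdown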
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
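If you are curious what that pseudoinverse looks like computationally, here is an illustrative sketch I am adding (in exact arithmetic it matches the library routine): build it from the SVD, inverting only the singular values above a small threshold: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-10] = 1 / s[s > 1e-10]           # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)  # Moore-Penrose pseudoinverse of X_b
X_b_pinv.dot(y)                               # should agree with theta_best computed above
###Output _____no_output_____ ###Markdown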
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
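For reference (an addition of mine; this is Equation 4-8 in the book), the penalty that Ridge Regression adds to the MSE is $J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$, and `penalty="l2"` below asks `SGDRegressor` to add the same kind of $\ell_2$ term; keep in mind the two estimators do not scale their regularization hyperparameters identically, so the fitted coefficients are not expected to match exactly.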
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
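In the same future-proofing spirit, one note from me (not in the original): `np.int` was deprecated in NumPy 1.20 and removed in NumPy 1.24, so on a recent NumPy the cast in the previous cell should use the built-in `int` instead: ###Code
y = (iris["target"] == 2).astype(int)  # 1 if Iris virginica, else 0
###Output _____no_output_____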
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training.
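One caveat worth flagging here (my addition): `np.exp` can overflow for large logits. A standard fix, which leaves the result mathematically unchanged, is to subtract each row's maximum before exponentiating: ###Code
def softmax_stable(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)  # same probabilities, but exp never sees huge values
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)
###Output _____no_output_____ ###Markdown The logits stay small in this exercise, so the simple version above works fine.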
Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
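Spelled out (my phrasing of what the code below computes), the regularized cost is $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{k=1}^{K}\sum\limits_{j=1}^{n}{\left(\theta_j^{(k)}\right)^2}$, where the second sum runs over the feature weights only, skipping the bias row of $\mathbf{\Theta}$.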
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
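A variation you may want to experiment with (entirely my addition, a sketch rather than anything from the book): instead of stopping at the very first uptick, tolerate a fixed number of non-improving iterations with a `patience` counter, which makes the stopping point less sensitive to small fluctuations of the validation loss: ###Code
eta = 0.1
alpha = 0.1              # same regularization strength as above
patience = 20            # hypothetical: how many non-improving iterations we tolerate
best_loss, wait = np.infty, 0
Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # one Batch Gradient Descent step on the training set
    Y_proba = softmax(X_train.dot(Theta))
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # regularized loss on the validation set
    Y_proba_valid = softmax(X_valid.dot(Theta))
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta[1:]))

    if loss < best_loss:
        best_loss, wait = loss, 0
    else:
        wait += 1
        if wait >= patience:
            print(iteration, loss, "early stopping (patience exhausted)")
            break
###Output _____no_output_____ ###Markdown Because `Theta` is re-initialized randomly here, the exact stopping iteration will vary from run to run.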
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, 
y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from 
sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt 
import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - 
y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ 
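To make these three equations concrete before handing the work to Scikit-Learn, here is a small NumPy sketch on a toy batch; the numbers are made up purely for illustration (two samples, three classes, with the bias term $x_0 = 1$ included in the inputs):
###Code
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])     # the scores s_k(x) for each sample and class
Y = np.array([[1., 0., 0.],
              [0., 1., 0.]])             # one-hot targets y_k
X_toy = np.array([[1.0, 1.4, 0.2],
                  [1.0, 4.5, 1.5]])      # inputs, first column is the bias term

# Equation 4-20: softmax turns the scores into probabilities (each row sums to 1)
P = np.exp(logits) / np.sum(np.exp(logits), axis=1, keepdims=True)

# Equation 4-22: cross-entropy cost, averaged over the batch
loss = -np.mean(np.sum(Y * np.log(P), axis=1))

# Equation 4-23: gradients for all classes at once, one column per class vector theta^(k)
gradients = 1 / len(X_toy) * X_toy.T.dot(P - Y)
P.round(3), loss, gradients.round(3)
###Output
_____no_output_____
###Markdown
Each row of `P` sums to 1, and the gradient matrix has one column per class, the same layout as the parameter matrix a softmax model learns.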
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
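Before moving on, it is worth a quick sanity check of the function we just defined; each row of the output should sum to 1 (the scores below are arbitrary values):
###Code
softmax(np.array([[1.0, 2.0, 3.0],
                  [2.0, 0.5, 0.1]])).sum(axis=1)
###Output
_____no_output_____
###Markdown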
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.4106007142918715 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629506 1000 0.503640075014894 1500 0.4946891059460321 2000 0.49129684180754774 2500 0.489899247009333 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Training Linear Models** Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np n_samples = 100 X = 2 * np.random.rand(n_samples, 1) y = 4 + 3 * X + np.random.randn(n_samples, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() %%time X_b = np.c_[np.ones((n_samples, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() %%time from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code %%time theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
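The pseudoinverse itself is obtained from the Singular Value Decomposition of $\mathbf{X}$: $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ is built by inverting every singular value above some small threshold and zeroing out the rest. A minimal sketch of that idea (the `1e-6` cutoff is an arbitrary choice for this illustration):
###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.array([1 / si if si > 1e-6 else 0. for si in s])  # invert only the non-negligible singular values
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)  # same parameters as theta_best above
###Output
_____no_output_____
###Markdown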
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output _____no_output_____ ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: y_predict = X_new_b.dot(theta) style = "b-" if i > 0 else "r--" plt.plot(X_new, y_predict, style) random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("sgd_plot") plt.show() theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", 
fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output _____no_output_____ ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output _____no_output_____ ###Markdown Regularized models Ridge ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 
1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=5, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation 
set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output _____no_output_____ ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", 
linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. What Linear Regression training algorithm can you use if you have a training set with millions offeatures?2. Suppose the features in your training set have very different scales. What algorithms might sufferfrom this, and how? What can you do about it?3. Can Gradient Descent get stuck in a local minimum when training a Logistic Regression model?4. Do all Gradient Descent algorithms lead to the same model provided you let them run long enough?5. Suppose you use Batch Gradient Descent and you plot the validation error at every epoch. If younotice that the validation error consistently goes up, what is likely going on? How can you fix this?6. Is it a good idea to stop Mini-batch Gradient Descent immediately when the validation error goesup?7. Which Gradient Descent algorithm (among those we discussed) will reach the vicinity of the optimalsolution the fastest? Which will actually converge? How can you make the others converge as well?8. Suppose you are using Polynomial Regression. You plot the learning curves and you notice that thereis a large gap between the training error and the validation error. What is happening? What are threeways to solve this?9. Suppose you are using Ridge Regression and you notice that the training error and the validationerror are almost equal and fairly high. Would you say that the model suffers from high bias or highvariance? Should you increase the regularization hyperparameter α or reduce it?10. Why would you want to use:Ridge Regression instead of plain Linear Regression (i.e., without any regularization)?Lasso instead of Ridge Regression?Elastic Net instead of Lasso?11. Suppose you want to classify pictures as outdoor/indoor and daytime/nighttime. Should youimplement two Logistic Regression classifiers or one Softmax Regression classifier? 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. 
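For instance, a quick throwaway check along those lines (the randomly initialized `Theta_check` below exists only for this illustration) confirms that every term has the shape you expect before you write the full training loop:
###Code
Theta_check = np.random.randn(n_inputs, n_outputs)          # (3, 3): one column per class
logits_check = X_train.dot(Theta_check)                     # (m, 3)
error_check = softmax(logits_check) - Y_train_one_hot       # (m, 3)
grad_check = 1 / len(X_train) * X_train.T.dot(error_check)  # (3, 3): same shape as Theta_check
X_train.shape, Theta_check.shape, logits_check.shape, grad_check.shape
###Output
_____no_output_____
###Markdown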
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output _____no_output_____ ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output _____no_output_____ ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: 
# not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression 
= Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 
1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = 
theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", 
fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrices for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing.
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.44618386482 500 0.835100303577 1000 0.687696155441 1500 0.601029983545 2000 0.544278281196 2500 0.503726274224 3000 0.472835729391 3500 0.448187250818 4000 0.427834726281 4500 0.410589102282 5000 0.395680325749 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.62957494791 500 0.534163155437 1000 0.503771274864 1500 0.494805645558 2000 0.491408194841 2500 0.490008507445 3000 0.489407428961 3500 0.489143102469 4000 0.489025165491 4500 0.488972058096 5000 0.488948000479 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Extra Knowledge > [Differences between the **L1-norm** and the **L2-norm**](http://www.chioka.in/differences-between-the-l1-norm-and-the-l2-norm-least-absolute-deviations-and-least-squares/) (Least Absolute Deviations and Least Squares)![](http://www.chioka.in/wp-content/uploads/2013/12/L1-vs-L2-norm-visualization.png) > [**Sparse models**](https://dawn.cs.stanford.edu/2017/08/29/taba/) – models where only a small fraction of parameters are non-zero. **Sparsity** is beneficial in several ways: sparse models are more easily interpretable by humans, and sparsity can yield statistical benefits – such as reducing the number of examples that have to be observed to learn the model. In a sense, we can think of sparsity as an antidote to the oft-maligned curse of dimensionality. > The score t from ${\sigma(t)} = \frac{1}{1+e^{-t}}$ is often called the **logit**. The name comes from the fact that the logit function, defined as logit(p) = log(p/(1-p)), is the inverse of the logistic function. Indeed, if you compute the logit of the estimated probability **p**, you will find that the result is **t**. The logit is also called the log-odds, since it is the log of the ratio between the estimated probability for the positive class and the estimated probability for the negative class. > **Cross Entropy** originated from information theory. Suppose you want to efficiently transmit information about the weather every day. If there are eight options (sunny, rainy, etc.), you could encode each option using 3 bits since $2^3 = 8$. However, if you think it will be sunny almost every day, it would be much more efficient to code “sunny” on just one bit (0) and the other seven options on 4 bits (starting with a 1). Cross entropy measures the average number of bits you actually send per option. If your assumption about the weather is perfect, cross entropy will just be equal to the entropy of the weather itself (i.e., its intrinsic unpredictability). But if your assumptions are wrong (e.g., if it rains often), cross entropy will be greater by an amount called the Kullback–Leibler divergence. Linear regression using the Normal Equation To find the value of θ that minimizes the cost function, there is a closed-form solution—in other words, a mathematical equation that gives the result directly.
This is called the Normal Equation:$\hat{\theta} = {(X^TX)}^{-1} {X^T} {y}$, where:* $\hat{\theta}$ is the value of θ that minimizes the cost function* y is the vector of target values containing y(1) to y(m) ###Code import numpy as np X = 2 * np.random.rand(100, 1) # The function that we used to generate the data is y = 4 + 3x1 + Gaussian noise y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output Saving figure generated_data_plot ###Markdown Now let’s compute $\hat{\theta}$ using the Normal Equation. * use the inv() function to compute the inverse of a matrix* the dot() method for matrix multiplication ###Code X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown We would have hoped for ${\theta}_0$ = 4 and ${\theta}_1$ = 3 instead of ${\theta}_0$ = 4.215 and ${\theta}_1$ = 2.770. Close enough, but the noise made it impossible to recover the exact parameters of the original function. Now you can make predictions using $\hat{\theta}$: ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() ###Output Saving figure linear_model_predictions_plot ###Markdown > Note that Scikit-Learn separates the bias term *intercept_* from the feature weights *coef_* ###Code from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes ${X}^+{y}$, where ${X}^{+}$ is the _pseudoinverse_ of ${X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown The pseudoinverse itself is computed using a standard matrix factorization technique called Singular Value Decomposition (SVD)> **Why use this approach?**1. The pseudoinverse is computed as ${X}^+ = {V} {E}^+ {U}^T$. This approach is more efficient than computing the Normal Equation, plus it handles edge cases nicely: indeed, the Normal Equation may not work if the matrix ${X}^T {X}$ is not invertible (i.e., singular), such as if m < n or if some features are redundant, but the pseudoinverse is always defined.2. The Normal Equation computes the inverse of ${X}^T {X}$, which is an (n + 1) × (n + 1) matrix (where n is the number of features). 
The computational complexity of inverting such a matrix is typically about O(${n}^{2.4}$) to O(${n}^{3}$). The SVD approach used by Scikit-Learn’s LinearRegression class is about O(${n}^{2}$) -- If you double the number of features, you multiply the computation time by roughly 4, compared to 8 (Normal Eq) > Now we will look at very **different ways to train a Linear Regression** model, better suited for cases where there are a **large number of features, or too many training instances to fit in memory**. Normal Equation vs Gradient Descent> While the Normal Equation can only perform Linear Regression, the Gradient Descent algorithms can be used to train many other models Comparison of algos for Linear Regression| Algorithm | Large m | Out-of-core support | Large n | Hyperparams | Scaling required | Scikit-Learn ||-----------------|---------|---------------------|---------|-------------|------------------|------------------|| Normal Equation | Fast | No | Slow | 0 | No | n/a || SVD | Fast | No | Slow | 0 | No | LinearRegression || Batch GD | Slow | No | Fast | 2 | Yes | SGDRegressor || Stochastic GD | Fast | Yes | Fast | ≥2 | Yes | SGDRegressor || Mini-batch GD | Fast | Yes | Fast | ≥2 | Yes | SGDRegressor | > There is almost no difference after training: all these algorithms end up with very similar models and make predictions in exactly the same way. Gradient descentGradient Descent is a very generic optimization algorithm capable of finding optimal solutions to a wide range of problems. The general idea of Gradient Descent is to **tweak parameters iteratively in order to minimize a cost function**. > Fortunately, the MSE cost function for a Linear Regression model happens to be a convex function > When using Gradient Descent, you should ensure that all features have a similar scale (e.g., using Scikit-Learn’s StandardScaler class), or else it will take much longer to converge. ![](https://github.com/nyculescu/handson-ml2/blob/master/images/training_linear_models/gradient_descent_paths_in_parameter_space.jpg?raw=1)Gradient Descent algorithms end up near the minimum, but Batch GD’s path actually stops at the minimum, while both Stochastic GD and Mini-batch GD continue to walk around. > Batch GD takes a lot of time to take each step, and Stochastic GD and Mini-batch GD would also reach the minimum if you used a good learning schedule. 1) Linear regression using batch gradient descent We need to calculate how much the cost function will change if you change ${\theta_j}$ just a little bit (partial derivative).The gradient vector, noted ${\nabla}_{\theta}{MSE(\theta)}$, contains all the partial derivatives of the cost function (one for each model parameter).$${\nabla}_{\theta}{MSE(\theta)} = \left(\begin{array}{rrr}{\frac{\partial}{\partial\theta_0}{MSE(\theta)}}\\{\frac{\partial}{\partial\theta_1}{MSE(\theta)}}\\{...}\\{\frac{\partial}{\partial\theta_n}{MSE(\theta)}}\end{array}\right)= \frac{2}{m}{X^T}{X\theta-y}$$Batch Gradient Descent uses the whole batch of training data at every step. > **Training a Linear Regression** model when there are **hundreds of thousands of features** is much **faster using Gradient Descent** than using the Normal Equation or SVD decomposition Once you have the gradient vector, which points uphill, just go in the opposite direction to go downhill. This means subtracting ${\nabla}_{\theta}{MSE(\theta)}$ from θ. 
This is where the learning rate η comes into play: multiply the gradient vector by η to determine the size of the downhill step.Gradient Descent step: ${\theta}^{nextstep} = {\theta-\eta}{\nabla_\theta}{MSE(\theta)}$ ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) ###Output _____no_output_____ ###Markdown &darr; The first 10 steps of Gradient Descent using three different learning rates (the dashed line represents the starting point) ###Code np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown On the left, the learning rate is too low: the algorithm will eventually reach the solution, but it will take a long time. In the middle, the learning rate looks pretty good: in just a few iterations, it has already converged to the solution. On the right, the learning rate is too high: the algorithm diverges, jumping all over the place and actually getting further and further away from the solution at every step. > To find a good *learning rate*, you can use *grid search*. > You may want to **limit the number of iterations** so that grid search can eliminate models that take too long to converge.A simple solution is to set a very large number of iterations but to interrupt the algorithm when the gradient vector becomes tiny — that is, when its norm becomes smaller than a tiny number ϵ (called the *tolerance*) — because this happens when Gradient Descent has (almost) reached the minimum. 2) Stochastic Gradient Descent The main problem with Batch Gradient Descent is the fact that it uses the whole training set to compute the gradients at every step.At the opposite extreme, Stochastic Gradient Descent just picks a random instance in the training set at every step and computes the gradients based only on that single instance, but on the other hand, this algo is much less regular than Batch Gradient Descent: instead of gently decreasing until it reaches the minimum, the cost function will bounce up and down, decreasing only on average - over time it will end up very close to the minimum, but once it gets there it will continue to bounce around, never settling down. So once the algorithm stops, the final parameter values are good, but not optimal --> Therefore randomness is good to escape from local optima, but bad because it means that the algorithm can never settle at the minimum. One solution to this dilemma is to gradually reduce the learning rate. 
> SGD can be implemented as an **out-of-core algo** (online learning algos can also be used to train systems on huge datasets that cannot fit in one machine's main memory) The function that determines the learning rate at each iteration is called the **learning schedule**. If the learning rate is reduced too quickly, you may get stuck in a local minimum, or even end up frozen halfway to the minimum. If the learning rate is reduced too slowly, you may jump around the minimum for a long time and end up with a suboptimal solution if you halt training too early. ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) # While the Batch Gradient Descent code iterated 1,000 times through the whole training set, # this code goes through the training set only 50 times and reaches a fairly good solution n_epochs = 50 # by convention we iterate by rounds of m iterations; each round is called an epoch t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta ###Output _____no_output_____ ###Markdown > When using SGD, the training instances must be independent and identically distributed (IID) to ensure that the params get pulled toward the global minimum, on average. A simple way to ensure this is to shuffle the instances during training. If you don't shuffle the instances - e.g., if the instances are sorted by label - then SGD will start by optimizing for one label, then the next, and so on, and it will not settle close to the global minimum. ###Code from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, # the code will run for max 1000 epochs * tol=1e-3, # * or until the loss drops by less than 0.001 during one epoch penalty=None, # it doesn't use any regularization eta0=0.1, # learning rate random_state=42 ) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 3) Mini-batch gradient descent Mini-batch GD computes the gradients on small random sets of instances called mini-batches. > The main advantage of Mini-batch GD over Stochastic GD is that you can get a **performance boost** from hardware optimization of matrix operations, especially when **using GPUs**.
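As a side note (an illustrative sketch, not part of the book's code): scikit-learn's `SGDRegressor` can also be fed mini-batches incrementally through its `partial_fit()` method, which is one way to train out-of-core. The snippet reuses the `X` and `y` generated earlier; the batch size, the number of epochs and the name `sgd_inc` are arbitrary choices made for this illustration.
###Code
# Illustrative sketch only: mini-batch style training via partial_fit()
# (reuses X and y from the earlier cells; batch size of 20 and 50 epochs are arbitrary choices)
from sklearn.linear_model import SGDRegressor

sgd_inc = SGDRegressor(penalty=None, eta0=0.1, random_state=42)
batch_size = 20
for epoch in range(50):
    shuffled_indices = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = shuffled_indices[start:start + batch_size]
        sgd_inc.partial_fit(X[batch], y[batch].ravel())  # one pass of SGD over this mini-batch
sgd_inc.intercept_, sgd_inc.coef_
###Output _____no_output_____ ###Markdown Below, the same mini-batch idea is implemented by hand so that the parameter path can be recorded and compared with the Batch GD and Stochastic GD paths.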
###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression What if your data is actually more complex than a simple straight line? Surprisingly, you can actually use a linear model to fit nonlinear data. A simple way to do this is to add powers of each feature as new features, then train a linear model on this extended set of features. This technique is called **Polynomial Regression**. ###Code import numpy as np import numpy.random as rnd np.random.seed(42) # generate some nonlinear data, based on a simple quadratic equation m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() ###Output Saving figure quadratic_data_plot ###Markdown Clearly, a straight line will never fit this data properly ###Code from sklearn.preprocessing import PolynomialFeatures # add the square (2nd-degree polynomial) of the X feature in the training set as new feature poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] ###Output _____no_output_____ ###Markdown X_poly now contains the original feature of X plus the square of this feature. 
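A quick sanity check (not from the book) makes this explicit:
###Code
# Sanity check: column 0 of X_poly is the original feature, column 1 is its square
np.allclose(X_poly[:, 0], X[:, 0]), np.allclose(X_poly[:, 1], X[:, 0] ** 2)
###Output _____no_output_____ ###Markdown Now a `LinearRegression` model can be fitted to this extended training data: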
###Code from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() # fit a LinearRegression model to this extended training data lin_reg.fit(X_poly, y) print("bias term: %s" % lin_reg.intercept_) print("weights: %s" % lin_reg.coef_) ###Output bias term: [1.78134581] weights: [[0.93366893 0.56456263]] ###Markdown The model estimates $\hat{y} = {0.56}{x_1^2} + {0.93}{x_1} + {1.78}$ when in fact the original function was ${y} = {0.5}{x_1^2} + {1.0}{x_1} + {2.0} + {Gaussian \space noise}$ ###Code X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() ###Output Saving figure quadratic_predictions_plot ###Markdown Note that when there are multiple features, **Polynomial Regression is capable of finding relationships between features** (which is something a plain Linear Regression model cannot do).For example, if there were two features a and b, PolynomialFeatures with degree=3 would not only add the features $a^2$, $a^3$, $b^2$, and $b^3$, but also the combinations $ab$, $a^2b$, and $ab^2$.> PolynomialFeatures(degree=d) transforms an array containing n features into an array containing $\frac{(n+d)!}{(d!n!)}$ features, where n! is the factorial of n, equal to 1 × 2 × 3 × ⋯ × n. Beware of the combinatorial explosion of the number of features! Learning Curves ###Code from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown &uarr; applies a 300-degree polynomial model to the preceding training data, and compares the result with a pure linear model and a quadratic model (2nd-degree polynomial). Notice how the 300-degree polynomial model wiggles around to get as close as possible to the training instances.* high-degree Polynomial Regression model is severely overfitting the training data* the linear model is underfitting it **How can you tell that your model is overfitting or underfitting the data?**1) Use cross-validation to get an estimate of a model’s generalization performance. * If a model performs well on the training data but generalizes poorly according to the cross-validation metrics, then your model is overfitting. * If it performs poorly on both, then it is underfitting. This is one way to tell when a model is too simple or too complex.2) Look at the learning curves: these are plots of the model’s performance on the training set and the validation set as a function of the training set size (or the training iteration). 
To generate the plots, simply train the model several times on different sized subsets of the training set. ###Code # The following code defines a function that plots the learning curves of a model given some training data from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="training set") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="validation set") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure underfitting_learning_curves_plot ###Markdown &uarr; First, let’s look at the performance on the training data: when there are just one or two instances in the training set, the model can fit them perfectly, which is why the curve starts at zero. But as new instances are added to the training set, it becomes impossible for the model to fit the training data perfectly, both because the data is noisy and because it is not linear at all. So the error on the training data goes up until it reaches a plateau, at which point adding new instances to the training set doesn’t make the average error much better or worse. Now let’s look at the performance of the model on the validation data. When the model is trained on very few training instances, it is incapable of generalizing properly, which is why the validation error is initially quite big. Then as the model is shown more training examples, it learns and thus the validation error slowly goes down. However, once again a straight line cannot do a good job modeling the data, so the error ends up at a plateau, very close to the other curve.These learning curves are typical of an underfitting model. Both curves have reached a plateau; they are close and fairly high. > If your model is underfitting the training data, adding more training examples will not help. You need to use a more complex model or come up with better features. ###Code # learning curves of a 10th-degree polynomial model on the same data from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown &uarr; These learning curves look a bit like the previous ones, but there are two very important differences:The error on the training data is much lower than with the Linear Regression model.There is a gap between the curves. This means that the model performs significantly better on the training data than on the validation data, which is the hallmark of an overfitting model. 
However, if you used a much larger training set, the two curves would continue to get closer. > One way to improve an overfitting model is to feed it more training data until the validation error reaches the training error. An important theoretical result of statistics and Machine Learning is the fact that a **model’s generalization error can be expressed as the sum of three very different errors**:1) BiasThis part of the generalization error is due to wrong assumptions, such as assuming that the data is linear when it is actually quadratic. A high-bias model is most likely to underfit the training data.2) VarianceThis part is due to the model’s excessive sensitivity to small variations in the training data. A model with many degrees of freedom (such as a high-degree polynomial model) is likely to have high variance, and thus to overfit the training data.3) Irreducible errorThis part is due to the noisiness of the data itself. The only way to reduce this part of the error is to clean up the data (e.g., fix the data sources, such as broken sensors, or detect and remove outliers).> Increasing a model’s complexity will typically increase its variance and reduce its bias. Conversely, reducing a model’s complexity increases its bias and reduces its variance. This is why it is called a tradeoff. Regularized models > A good way to **reduce overfitting** is to **regularize the model** (i.e., to constrain it): the fewer degrees of freedom it has, the harder it will be for it to overfit the data. For example, a simple way to regularize a polynomial model is to reduce the number of polynomial degrees. > For a **linear model**, **regularization** is typically achieved by **constraining the weights of the model**.Three different ways to constrain the weights:* Ridge Regression* Lasso Regression* Elastic Net ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown Ridge Regression (Tikhonov regularization)Is a regularized version of Linear Regression. A regularization term equal to $\alpha\sum_{i=1}^n{\theta_i^2}$ is added to the cost function - this forces the learning algorithm to not only fit the data but also **keep the model weights as small as possible**.The hyperparameter α controls how much you want to regularize the model. If α = 0 then Ridge Regression is just Linear Regression. If α is very large, then all weights end up very close to zero and the result is a flat line going through the data’s mean.> The regularization term should only be added to the cost function during training> Once the model is trained, you want to evaluate the model’s performance using the unregularized performance measure.>> &uarr; $?$ Ridge Regression cost function:${J(\theta)} = {MSE(\theta)} + {\alpha}\frac{1}{2}\sum_{i=1}^n{\theta_i^2}$> Note that the bias term $θ_0$ is not regularized (the sum starts at i = 1, not 0).> *J(θ) is a common notation for cost functions that don't have a short name*> If we define **w** as the vector of feature weights ($θ_1$ to $θ_n$), then the regularization term is simply equal to ½(∥ w ∥$_2$)$^2$, where ∥ w ∥$_2$ represents the ℓ$_2$ norm of the weight vector. For Gradient Descent, just add αw to the MSE gradient vector. > It is quite common for the cost function used during training to be different from the performance measure used for testing. 
Apart from regularization, another reason why they might be different is that **a good training cost function should have optimization-friendly derivatives, while the performance measure used for testing should be as close as possible to the final objective**. A good example of this is a classifier trained using a cost function such as the log loss (discussed in a moment) but evaluated using precision/recall. Ridge Regression closed-form solution (Eq 4-9):$\hat{\theta} = {(X^TX+\alpha A)}^{-1} {X^T} {y}$, where* A is the (n + 1) × (n + 1) identity matrix except with a 0 in the top-left cell, corresponding to the bias term ###Code # perform Ridge Regression with Scikit-Learn using a closed-form solution (Eq 4-9) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # perform Ridge Regression with Scikit-Learn using a Stochastic Average Gradient descent ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown On the **left**, plain Ridge models are used, leading to linear predictions -- **linear model**On the **right**, the data is first expanded using PolynomialFeatures(degree=10), then it is scaled using a StandardScaler, and finally the Ridge models are applied to the resulting features: this is *Polynomial Regression with Ridge regularization* -- **polynomial model** >> $?/$ what's with the StandardScaler? **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
###Code # perform Ridge Regression with Scikit-Learn using Stochastic Gradient Descent sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression (Least Absolute Shrinkage and Selection Operator Regression) Just like Ridge Regression, it adds a regularization term to the cost function, but it uses the ℓ$_1$ norm of the weight vector instead of half the square of the ℓ$_2$ norm(Eq 4-10) Lasso Regression cost function: ${J(\theta)} = {MSE(\theta)} + {\alpha}\sum_{i=1}^n{|\theta_i|}$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() ###Output /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/_coordinate_descent.py:476: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Duality gap: 2.802867703827423, tolerance: 0.0009294783355207351 positive) ###Markdown An important characteristic of Lasso Regression is that it tends to completely eliminate the weights of the least important features (i.e., set them to zero) -- **Lasso Regression** automatically performs feature selection and **outputs a sparse model** (i.e., with few nonzero feature weights). E.g., In the right img (with α = 10-7) the dashed line looks quadratic, almost linear: all the weights for the high-degree polynomial features are equal to zero. The Lasso cost function is not differentiable at θ$_i$ = 0 (for i = 1, 2, ⋯, n), but Gradient Descent still works fine if you use a subgradient vector **g** (you can think of a subgradient vector at a nondifferentiable point as an intermediate vector between the gradient vectors around that point) instead when any θi = 0.Lasso Regression subgradient vector:![](https://learning.oreilly.com/library/view/hands-on-machine-learning/9781491962282/assets/eq_38.png) ###Code # A small Scikit-Learn example using the Lasso class from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown > &uarr; we could instead use an SGDRegressor(penalty="l1") Plain Linear Regression vs Lasso vs Ridge vs Elastic Net regularization > take a look into the book at page 138 for Lasso vs Ridge and at the *lasso_vs_ridge_plot*> Elastic Net is a middle ground between Lasso regression and Ridge regression > So when should you use plain Linear Regression (i.e., without any regularization), Ridge, Lasso, or Elastic Net?* It is almost always preferable to have at least a little bit of regularization, so generally you should avoid plain Linear Regression* **Ridge is a good default, but** if you suspect that only **a few features are actually useful**, you should **prefer Lasso or Elastic Net** since they tend to reduce the useless features’ weights down to zero* Lasso may behave erratically (chaotic) when: * the number of features is greater than the number of training instances * or when several features are strongly correlated ###Code best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) 
T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Elastic Net The regularization term is a simple mix of both Lasso's and Ridge's regularization terms (controlling the mix ratio **r**)* r = 0 -> Elastic Net = Ridge Regression* r = 1 -> Elastic Net = Lasso Regression. (Eq 4-12) Elastic Net cost function: ${J(\theta)} = {MSE(\theta)} + {r\alpha}\sum_{i=1}^n{|\theta_i|} + {\frac{1-r}{2}\alpha}\sum_{i=1}^n{\theta_i^2}$ ###Code # Example using Scikit-Learn’s ElasticNet (l1_ratio corresponds to the mix ratio r) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping A very different way to regularize iterative learning algorithms such as Gradient Descent is to stop training as soon as the validation error reaches a minimum -- early stopping Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90,
include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, # with warm_start=True, when the fit() method is called, # it just continues training where it left off instead of restarting from scratch warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() ###Output Saving figure early_stopping_plot ###Markdown &uarr; after a while the **validation error** stops decreasing and actually starts to **go back up**. This indicates that the model has started to **overfit the training data**. > With early stopping you just stop training as soon as the validation error reaches the minimum> With Stochastic and Mini-batch Gradient Descent, the curves are not so smooth, and it may be hard to know whether you have reached the minimum or not --> one solution is to stop only after the validation error has been above the minimum for some time, then roll back the model parameters to the point where the validation error was at a minimum Logistic regression (Logit Regression) Some regression algorithms can be used for classification as well (and vice versa). Logistic Regression is commonly used to estimate the probability that an instance belongs to a particular class (e.g., what is the probability that this email is spam?). If the estimated probability is greater than 50%, then the model predicts that the instance belongs to that class (called the positive class, labeled “1”), or else it predicts that it does not (i.e., it belongs to the negative class, labeled “0”) --> this makes it a binary classifier. 
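As a minimal sketch of that decision rule (the weights and feature values below are made up purely for illustration, not taken from any trained model): ###Code
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

# hypothetical parameters: [bias term, weight of a single feature]
theta = np.array([-4.0, 2.5])

# two instances, each with x0 = 1 (bias input) followed by one feature value
x_new = np.array([[1.0, 1.0],
                  [1.0, 2.0]])

p_hat = sigmoid(x_new.dot(theta))    # estimated probability of the positive class
y_pred = (p_hat >= 0.5).astype(int)  # predict 1 if the probability is at least 50%, else 0
p_hat, y_pred
###Output
_____no_output_____
###Markdown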
Estimating Probabilities Just like a Linear Regression model, a Logistic Regression model computes a weighted sum of the input features (plus a bias term), but instead of outputting the result directly like the Linear Regression model does, it outputs the logistic of this result. (Eq 4-13) Logistic Regression model estimated probability (vectorized form): $\hat{p} = {h_{\theta}}{(x)} = {\sigma(\theta^Tx)}$ The logistic, σ(·), is a sigmoid function that outputs a number between 0 and 1. (Eq 4-14) Logistic function: ${\sigma(t)} = \frac{1}{1+e^{-t}}$ ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output Saving figure logistic_function_plot ###Markdown &uarr; * σ(t) < 0.5 when t < 0* σ(t) ≥ 0.5 when t ≥ 0 so a Logistic Regression model predicts 1 if $\theta^Tx$ is positive, and 0 if it is negative Training and Cost Function Now we know how a Logistic Regression model estimates probabilities and makes predictions.**But how is it trained?**The objective of training is to set the parameter vector θ so that the model estimates high probabilities for positive instances (y = 1) and low probabilities for negative instances (y = 0). (Eq 4-16) Cost function of a single training instance: $$c(\theta) =\begin{cases}-log(\hat{p}),& \text{if y=1}\\-log(1-\hat{p}),& \text{if y=0}\end{cases}$$This cost function makes sense because * –log(t) grows very large when t approaches 0, * so the cost will be large if the model estimates a probability close to 0 for a positive instance, * and it will also be very large if the model estimates a probability close to 1 for a negative instance. * On the other hand, – log(t) is close to 0 when t is close to 1, * so the cost will be close to 0 if the estimated probability is close to 0 for a negative instance * or close to 1 for a positive instance, which is precisely what we want.> The cost function over the whole training set is simply the average cost over all training instances. (Eq 4-17) Logistic Regression cost function (log loss): $J(\theta)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}log(\hat{p}^{(i)})+(1-y^{(i)})log(1-\hat{p}^{(i)})]$* The bad news is that there is no known closed-form equation to compute the value of θ that minimizes this cost function (there is no equivalent of the Normal Equation). * But the good news is that this cost function is convex, so Gradient Descent (or any other optimization algorithm) is guaranteed to find the global minimum (if the learning rate is not too large and you wait long enough) The partial derivatives of the log loss have the same form as the MSE partial derivatives (Eq 4-5), with the prediction $\sigma(\theta^Tx^{(i)})$ in place of the linear prediction: $\frac{\partial}{\partial\theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^m\left(\sigma(\theta^Tx^{(i)})-y^{(i)}\right)x_j^{(i)}$. Once you have the gradient vector containing all the partial derivatives, you can use it in the **Batch Gradient Descent algorithm**. For **Stochastic GD** you would of course just take one instance at a time, and for **Mini-batch GD** you would use a mini-batch at a time.
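To make this concrete, here is a minimal NumPy sketch of Batch Gradient Descent on the log loss (the toy data and every variable name below are invented purely for illustration and are not part of the notebook's datasets): ###Code
import numpy as np

np.random.seed(42)

# toy binary classification data: the label is 1 when the single feature is positive
m = 100
X_toy = np.random.randn(m, 1)
y_toy = (X_toy[:, 0] > 0).astype(int)
X_toy_b = np.c_[np.ones((m, 1)), X_toy]   # add x0 = 1 to each instance

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

eta = 0.1                                 # learning rate
n_iterations = 1000
theta = np.random.randn(2)                # random initialization

for iteration in range(n_iterations):
    p_hat = sigmoid(X_toy_b.dot(theta))              # estimated probabilities, shape (m,)
    gradients = 1/m * X_toy_b.T.dot(p_hat - y_toy)   # gradient vector of the log loss
    theta = theta - eta * gradients

theta
###Output
_____no_output_____
###Markdown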
Decision Boundaries ###Code from sklearn import datasets # Let’s use the iris dataset to illustrate Logistic Regression # This is a famous dataset that contains the sepal and petal length and width of 150 iris flowers # of three different species: Iris-Setosa, Iris-Versicolor, and Iris-Virginica iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) # Let’s try to build a classifier to detect the Iris-Virginica type based only on the petal width feature X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) # Let’s look at the model’s estimated probabilities for flowers with petal widths varying from 0 to 3 cm X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary ###Output _____no_output_____ ###Markdown The petal width of Iris-Virginica flowers (represented by triangles) ranges from 1.4 cm to 2.5 cm, while the other iris flowers (represented by squares) generally have a smaller petal width, ranging from 0.1 cm to 1.8 cm. Notice that there is a bit of overlap. Above about 2 cm the classifier is highly confident that the flower is an Iris-Virginica (it outputs a high probability for that class), while below 1 cm it is highly confident that it is not an Iris-Virginica (high probability for the “Not Iris-Virginica” class). In between these extremes, the classifier is unsure. However, if you ask it to predict the class (using the predict() method rather than the predict_proba() method), it will return whichever class is the most likely.
Therefore, there is a decision boundary at around 1.6 cm where both probabilities are equal to 50%: if the petal width is higher than 1.6 cm, the classifier will predict that the flower is an Iris-Virginica, or else it will predict that it is not (even if it is not very confident): ###Code log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown > The hyperparameter controlling the regularization strength of a Scikit-Learn LogisticRegression model is not alpha (as in other linear models), but its inverse: C. The higher the value of C, the less the model is regularized. ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output Saving figure logistic_regression_contour_plot ###Markdown The dashed line represents the points where the model estimates a 50% probability: this is the model’s decision boundary. > Note that it is a linear boundary.Each parallel line represents the points where the model outputs a specific probability, from 15% (bottom left) to 90% (top right). All the flowers beyond the top-right line have an over 90% chance of being Iris-Virginica according to the model. > Logistic Regression models can be regularized using ℓ1 or ℓ2 penalties. Scitkit-Learn actually adds an ℓ2 penalty by default. Softmax Regression The **Logistic Regression model can be generalized to support multiple classes directly**, without having to train and combine multiple binary classifiers -- this is called Softmax Regression, or Multinomial Logistic Regression. when given an instance x, the Softmax Regression model first computes a score sk(x) for each class k, then estimates the probability of each class by applying the softmax function (also called the normalized exponential) to the scores. The equation to compute sk(x) should look familiar, as it is just like the equation for Linear Regression prediction.(Eq 4-19) Softmax score for class k: $s_k(x)=(\theta^{(k)})^Tx$Note that each class has its own dedicated parameter vector θ$_{(k)}$. 
All these vectors are typically stored as rows in a parameter matrix Θ.Once you have computed the score of every class for the instance x, you can estimate the probability $\hat{p}_k$ that the instance belongs to class k by running the scores through the softmax function (Eq 4-20): it computes the exponential of every score, then normalizes them (dividing by the sum of all the exponentials)(Eq 4-20) Softmax function: $\hat{p}_k = \sigma(s(x))_k = \frac{exp(s_k(x))}{\sum_{j=1}^Kexp(s_j(x))}$, where* K is the number of classes* s(x) is a vector containing the scores of each class for the instance x* σ(s(x))$_k$ is the estimated probability that the instance x belongs to class k given the scores of each class for that instanceJust like the Logistic Regression classifier, the Softmax Regression classifier predicts the class with the highest estimated probability (which is simply the class with the highest score), as shown in Equation 4-21: $\hat{y} = \underset{k}{argmax}\space \sigma(s(x))_k = \underset{k}{argmax}\space s_k(x) = \underset{k}{argmax}\space((\theta^{(k)})^Tx)$* The argmax operator returns the value of a variable that maximizes a function. In this equation, it returns the value of k that maximizes the estimated probability σ(s(x))$_k$. > The Softmax Regression classifier predicts only one class at a time. You cannot use it to recognize multiple people in one picture. The objective is to have a model that estimates a high probability for the target class (and consequently a low probability for the other classes). Minimizing the cost function, called the **cross entropy**, should lead to this objective because it penalizes the model when it estimates a low probability for a target class. Cross entropy is frequently used to measure how well a set of estimated class probabilities match the target classes. ###Code # Let’s use Softmax Regression to classify the iris flowers into all three classes X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] # LogisticRegression uses one-versus-all by default when you train it on more than two classes, # but you can set the multi_class hyperparameter to "multinomial" to switch it to Softmax Regression instead. 
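# note: with multi_class="multinomial" the solver minimizes the cross entropy described above,
# plus the default l2 penalty whose strength is controlled by C (larger C means less regularization)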
softmax_reg = LogisticRegression(multi_class="multinomial", # set the multi_class hyperparameter to "multinomial" to switch it to Softmax Regression solver="lbfgs", # You must also specify a solver that supports Softmax Regression, such as the "lbfgs" solver C=10, # It also applies ℓ2 regularization by default, which you can control using the hyperparameter C random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() ###Output Saving figure softmax_regression_contour_plot ###Markdown &uarr; Figure shows the resulting decision boundaries, represented by the background colors. Notice that the decision boundaries between any two classes are linear. The figure also shows the probabilities for the Iris-Versicolor class, represented by the curved lines (e.g., the line labeled with 0.450 represents the 45% probability boundary). Notice that the model can predict a class that has an estimated probability below 50%. For example, at the point where all decision boundaries meet, all classes have an equal estimated probability of 33%. &darr; So the next time you find an iris with 5 cm long and 2 cm wide petals, you can ask your model to tell you what type of iris it is, and it will answer Iris-Virginica (class 2) with 94.2% probability (or Iris-Versicolor with 5.8% probability) ###Code softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. 
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Our perfect model turns out to have slight imperfections. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary. 
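One quick way to see this variability (a sketch that leans on Scikit-Learn rather than the manual implementation; the seed values are arbitrary) is to repeat the split and the fit a few times and compare the test accuracies: ###Code
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X_iris = iris["data"][:, (2, 3)]   # petal length, petal width
y_iris = iris["target"]

for seed in (0, 1, 2, 3, 4):
    X_tr, X_te, y_tr, y_te = train_test_split(X_iris, y_iris, test_size=0.2, random_state=seed)
    clf = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10, random_state=seed)
    clf.fit(X_tr, y_tr)
    print(seed, clf.score(X_te, y_te))   # the test accuracy moves around from seed to seed
###Output
_____no_output_____
###Markdown
With only 150 instances, a handful of differently sampled test flowers is enough to move the score by several percentage points, which is exactly the variability described above.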
###Code ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercices in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import numpy.random as rnd import os # to make this notebook's output stable across runs rnd.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code X = 2 * rnd.rand(100, 1) y = 4 + 3 * X + rnd.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() import numpy.linalg as LA X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = LA.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) rnd.seed(42) theta = rnd.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] n_iterations = 50 t0, t1 = 5, 50 # learning schedule hyperparameters rnd.seed(42) theta = rnd.randn(2,1) # random initialization def learning_schedule(t): return t0 / (t + t1) m = len(X_b) for epoch in range(n_iterations): for i in range(m): if epoch 
== 0 and i < 20: y_predict = X_new_b.dot(theta) style = "b-" if i > 0 else "r--" plt.plot(X_new, y_predict, style) random_index = rnd.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("sgd_plot") plt.show() theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 rnd.seed(42) theta = rnd.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = rnd.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline(( ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), )) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) 
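# plot the curve fitted with this polynomial degree (degree 300 overfits wildly,
# degree 1 underfits, degree 2 roughly matches the quadratic data)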
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="Training set") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Training set size", fontsize=14) plt.ylabel("RMSE", fontsize=14) lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) save_fig("underfitting_learning_curves_plot") plt.show() from sklearn.pipeline import Pipeline polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("sgd_reg", LinearRegression()), )) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) save_fig("learning_curves_plot") plt.show() ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge rnd.seed(42) m = 20 X = 3 * rnd.rand(m, 1) y = 1 + 0.5 * X + rnd.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline(( ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), )) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1)) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag") ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1)) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5) 
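# l1_ratio corresponds to the mix ratio r: 0 would be pure Ridge (l2), 1 pure Lasso (l1)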
elastic_net.fit(X, y) elastic_net.predict([[1.5]]) rnd.seed(42) m = 100 X = 6 * rnd.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + rnd.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline(( ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), )) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - 
np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) from sklearn.linear_model import LogisticRegression X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 log_reg = LogisticRegression() log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) 
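# the dashed line drawn below is the model's linear decision boundary, i.e. the set of
# points where the estimated probability of Iris-Virginica is exactly 50%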
left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial", solver="lbfgs", C=10) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown Linear Regression The Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
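As a rough sketch of what happens under the hood (the cutoff below is an arbitrary small threshold, not the exact tolerance `pinv` uses): ###Code
# build the pseudoinverse of X_b from its SVD: invert the singular values that are
# above a small threshold, zero out the rest, then recompose the matrix
U, sigma, Vt = np.linalg.svd(X_b, full_matrices=False)
sigma_inv = np.where(sigma > 1e-6, 1 / sigma, 0.0)
X_b_pinv = Vt.T.dot(np.diag(sigma_inv)).dot(U.T)
X_b_pinv.dot(y)   # should match theta_best_svd above
###Output
_____no_output_____
###Markdown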
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Gradient Descent Batch Gradient Descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
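# overlay the parameter-space paths taken by Stochastic, Mini-batch and Batch Gradient Descent
# so the three optimization trajectories can be compared on the same axes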
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial Regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown Learning Curves ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train) + 1): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", 
LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized Linear Models Ridge Regression ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. 
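It is worth writing down that Ridge also has a closed-form solution, $\hat{\boldsymbol{\theta}} = \left(\mathbf{X}^T \mathbf{X} + \alpha \mathbf{A}\right)^{-1} \mathbf{X}^T \mathbf{y}$, where $\mathbf{A}$ is the $(n+1) \times (n+1)$ identity matrix with a 0 in the top-left cell, so the bias term is not regularized. A minimal NumPy sketch (my own addition, reusing the `X` and `y` generated above; it should closely match `Ridge(alpha=1)`): ###Code
X_b_ridge = np.c_[np.ones((m, 1)), X]      # add the bias column x0 = 1
A_reg = np.identity(X_b_ridge.shape[1])
A_reg[0, 0] = 0                            # do not penalize the bias term
alpha_ridge = 1
theta_ridge = np.linalg.inv(X_b_ridge.T.dot(X_b_ridge) + alpha_ridge * A_reg).dot(X_b_ridge.T).dot(y)
np.array([[1, 1.5]]).dot(theta_ridge)      # prediction at x1 = 1.5, comparable to ridge_reg.predict([[1.5]])
###Output
_____no_output_____
###Markdown
And here is the SGD-based version, which reaches a similar solution by applying an $\ell_2$ penalty during training: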
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Lasso Regression ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Elastic Net ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown Early Stopping ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) from copy import deepcopy poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = deepcopy(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * 
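# MSE cost surface over the (theta1, theta2) grid, used to build the Lasso vs Ridge contour plots below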
np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic Regression Decision Boundaries ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. 
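As a reminder of what the model computes: it estimates $\hat{p} = \sigma(\mathbf{x}^T \boldsymbol{\theta})$ using the sigmoid plotted above, and it predicts the positive class whenever $\hat{p} \geq 0.5$, which is equivalent to $\mathbf{x}^T \boldsymbol{\theta} \geq 0$. A small sketch (the helper below is my own, not part of Scikit-Learn) showing how you could reproduce `predict()` by hand for the 0/1 targets used here, e.g. with the `log_reg` fitted in the next cell: ###Code
def predict_by_hand(model, X):
    t = model.intercept_ + X.dot(model.coef_.T)   # decision scores x^T theta
    p_hat = 1 / (1 + np.exp(-t))                  # sigmoid
    return (p_hat >= 0.5).astype(int).ravel()     # class 1 when p_hat >= 0.5
###Output
_____no_output_____
###Markdown
Now the Scikit-Learn version: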
###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) ###Output _____no_output_____ ###Markdown Softmax Regression ###Code from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, 
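# zz holds the predicted class for every grid point and is used here to colour the background;
# the contour lines added next use zz1, the estimated probability of Iris versicolor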
cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code from sklearn import datasets iris = datasets.load_iris() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) rnd_indices[:5] # Training observations X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] # Validation observations # (the negative-index slicing feels a bit hacky) X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] # Test observations X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) # Uses the original class indices to select the positions in the array that should be set to 1 Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function.
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 # random initialisation Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: # Each 500 iterations it prints the loss function output loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) # note that the gradients matrix has shape n_inputs x n_outputs Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. 
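Written out, the regularized cost we will minimize is $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \alpha \dfrac{1}{2}\sum\limits_{j=1}^{n}\sum\limits_{k=1}^{K}{\Theta_{j,k}^2}$, with the penalty taken over the weight rows only.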
The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 # learning rate n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot # The second part in the gradients sum is the regularisation/penalisation component # Note how it's zero for the intercept gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code # Note that "the model" is given by the Theta matrix. # Fitting the model equals to finding the optimal values for Thera logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty # initialisation in infinity Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) # Adding early stopping is just adding this if/else logic if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
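One caveat worth noting: the loop above stops at the very first iteration where the validation loss fails to improve, and it keeps the `Theta` from that iteration, i.e. one gradient step past the best one. A common variant (a sketch reusing the same training/validation variables and hyperparameters defined above; the patience logic is my addition, not from the book) keeps a copy of the best parameters seen so far and only stops after the validation loss has failed to improve for several consecutive iterations: ###Code
best_loss = np.infty
best_Theta = None
patience, bad_steps = 10, 0                 # stop after 10 iterations without improvement
Theta_es = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
    # same regularized gradient step on the training set as above
    Y_proba = softmax(X_train.dot(Theta_es))
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_es[1:]]
    Theta_es = Theta_es - eta * gradients
    # regularized loss on the validation set
    Y_proba_valid = softmax(X_valid.dot(Theta_es))
    xentropy_valid = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    val_loss = xentropy_valid + alpha * 1/2 * np.sum(np.square(Theta_es[1:]))
    if val_loss < best_loss:
        best_loss, best_Theta, bad_steps = val_loss, Theta_es.copy(), 0
    else:
        bad_steps += 1
        if bad_steps >= patience:
            break
best_Theta                                  # best parameters seen; these could be used in place of Theta
###Output
_____no_output_____
###Markdown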
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) # contour for the class iris versicolor zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) # This plots the actual observations plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) # This colours the background based on the predicted class plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **4장 – 모델 훈련** _이 노트북은 4장에 있는 모든 샘플 코드와 연습문제 해답을 가지고 있습니다._ 구글 코랩에서 실행하기 설정 먼저 몇 개의 모듈을 임포트합니다. 맷플롯립 그래프를 인라인으로 출력하도록 만들고 그림을 저장하는 함수를 준비합니다. 또한 파이썬 버전이 3.5 이상인지 확인합니다(파이썬 2.x에서도 동작하지만 곧 지원이 중단되므로 파이썬 3을 사용하는 것이 좋습니다). 사이킷런 버전이 0.20 이상인지도 확인합니다. ###Code # 파이썬 ≥3.5 필수 import sys assert sys.version_info >= (3, 5) # 사이킷런 ≥0.20 필수 import sklearn assert sklearn.__version__ >= "0.20" # 공통 모듈 임포트 import numpy as np import os # 노트북 실행 결과를 동일하게 유지하기 위해 np.random.seed(42) # 깔끔한 그래프 출력을 위해 %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # 그림을 저장할 위치 PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("그림 저장:", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # 불필요한 경고를 무시합니다 (사이파이 이슈 #5998 참조) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 정규 방정식을 사용한 선형 회귀 ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() ###Output 그림 저장: generated_data_plot ###Markdown **식 4-4: 정규 방정식**$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$ ###Code X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1을 추가합니다. theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown $\hat{y} = \mathbf{X} \boldsymbol{\hat{\theta}}$ ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1을 추가합니다. 
y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown 책에 있는 그림은 범례와 축 레이블이 있는 그래프입니다: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown `LinearRegression` 클래스는 `scipy.linalg.lstsq()` 함수("least squares"의 약자)를 사용하므로 이 함수를 직접 사용할 수 있습니다: ###Code # 싸이파이 lstsq() 함수를 사용하려면 scipy.linalg.lstsq(X_b, y)와 같이 씁니다. theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown 이 함수는 $\mathbf{X}^+\mathbf{y}$을 계산합니다. $\mathbf{X}^{+}$는 $\mathbf{X}$의 _유사역행렬_ (pseudoinverse)입니다(Moore–Penrose 유사역행렬입니다). `np.linalg.pinv()`을 사용해서 유사역행렬을 직접 계산할 수 있습니다: $\boldsymbol{\hat{\theta}} = \mathbf{X}^{-1}\hat{y}$ ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown 배치 경사 하강법을 사용한 선형 회귀 **식 4-6: 비용 함수의 그레이디언트 벡터**$\dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta}) = \dfrac{2}{m} \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$**식 4-7: 경사 하강법의 스텝**$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta} - \eta \dfrac{\partial}{\partial \boldsymbol{\theta}} \text{MSE}(\boldsymbol{\theta})$ ###Code eta = 0.1 # 학습률 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # 랜덤 초기화 for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output 그림 저장: gradient_descent_plot ###Markdown 확률적 경사 하강법 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # 랜덤 초기화 for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # 책에는 없음 y_predict = X_new_b.dot(theta) # 책에는 없음 style = "b-" if i > 0 else "r--" # 책에는 없음 plt.plot(X_new, y_predict, style) # 책에는 없음 random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # 책에는 없음 plt.plot(X, 
y, "b.") # 책에는 없음 plt.xlabel("$x_1$", fontsize=18) # 책에는 없음 plt.ylabel("$y$", rotation=0, fontsize=18) # 책에는 없음 plt.axis([0, 2, 0, 15]) # 책에는 없음 save_fig("sgd_plot") # 책에는 없음 plt.show() # 책에는 없음 theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown 미니배치 경사 하강법 ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # 랜덤 초기화 t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output 그림 저장: gradient_descent_paths_plot ###Markdown 다항 회귀 ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from 
sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # 책에는 없음 plt.xlabel("Training set size", fontsize=14) # 책에는 없음 plt.ylabel("RMSE", fontsize=14) # 책에는 없음 lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("underfitting_learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # 책에는 없음 save_fig("learning_curves_plot") # 책에는 없음 plt.show() # 책에는 없음 ###Output 그림 저장: learning_curves_plot ###Markdown 규제가 있는 모델 ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) ###Output _____no_output_____ ###Markdown **식 4-8: 릿지 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \dfrac{1}{2}\sum\limits_{i=1}^{n}{\theta_i}^2$ ###Code from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output 그림 저장: ridge_regression_plot ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.21 버전의 기본값인 `max_iter=1000`과 `tol=1e-3`으로 지정합니다. 
###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-10: 라쏘 회귀의 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right|$ ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown **식 4-12: 엘라스틱넷 비용 함수**$J(\boldsymbol{\theta}) = \text{MSE}(\boldsymbol{\theta}) + r \alpha \sum\limits_{i=1}^{n}\left| \theta_i \right| + \dfrac{1 - r}{2} \alpha \sum\limits_{i=1}^{n}{{\theta_i}^2}$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown 조기 종료 예제: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # 중지된 곳에서 다시 시작합니다 y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) ###Output _____no_output_____ ###Markdown 그래프를 그립니다: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt 
import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output 그림 저장: lasso_vs_ridge_plot ###Markdown 로지스틱 회귀 ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() ###Output 그림 저장: logistic_function_plot ###Markdown **식 4-16: 하나의 훈련 샘플에 대한 비용 함수**$c(\boldsymbol{\theta}) =\begin{cases} -\log(\hat{p}) & \text{if } y = 1, \\ -\log(1 - \hat{p}) & \text{if } y = 0.\end{cases}$**식 4-17: 로지스틱 회귀 비용 함수(로그 손실)**$J(\boldsymbol{\theta}) = -\dfrac{1}{m} \sum\limits_{i=1}^{m}{\left[ y^{(i)} log\left(\hat{p}^{(i)}\right) + (1 - y^{(i)}) log\left(1 - \hat{p}^{(i)}\right)\right]}$**식 4-18: 로지스틱 비용 함수의 편도 함수**$\dfrac{\partial}{\partial \theta_j} \text{J}(\boldsymbol{\theta}) = \dfrac{1}{m}\sum\limits_{i=1}^{m}\left(\mathbf{\sigma(\boldsymbol{\theta}}^T \mathbf{x}^{(i)}) - 
y^{(i)}\right)\, x_j^{(i)}$ ###Code from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # 꽃잎 너비 y = (iris["target"] == 2).astype(np.int) # Iris virginica이면 1 아니면 0 ###Output _____no_output_____ ###Markdown **노트**: 향후 버전이 바뀌더라도 동일한 결과를 만들기 위해 사이킷런 0.22 버전의 기본값인 `solver="lbfgs"`로 지정합니다. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown 책에 실린 그림은 조금 더 예쁘게 꾸몄습니다: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output 그림 저장: logistic_regression_contour_plot ###Markdown **식 4-20: 소프트맥스 함수**$\hat{p}_k = \sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$**식 4-22: 크로스 엔트로피 비용 함수**$J(\boldsymbol{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$**식 4-23: 클래스 k에 대한 크로스 엔트로피의 그레이디언트 벡터**$\nabla_{\boldsymbol{\theta}^{(k)}} \, J(\boldsymbol{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ 
###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 너비 y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown 연습문제 해답 1. to 11. 부록 A를 참고하세요. 12. 조기 종료를 사용한 배치 경사 하강법으로 소프트맥스 회귀 구현하기(사이킷런을 사용하지 않고) 먼저 데이터를 로드합니다. 앞서 사용했던 Iris 데이터셋을 재사용하겠습니다. ###Code X = iris["data"][:, (2, 3)] # 꽃잎 길이, 꽃잎 넓이 y = iris["target"] ###Output _____no_output_____ ###Markdown 모든 샘플에 편향을 추가합니다 ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown 결과를 일정하게 유지하기 위해 랜덤 시드를 지정합니다: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown 데이터셋을 훈련 세트, 검증 세트, 테스트 세트로 나누는 가장 쉬운 방법은 사이킷런의 `train_test_split()` 함수를 사용하는 것입니다. 하지만 이 연습문제의 목적은 직접 만들어 보면서 알고리즘을 이해하는 것이므로 다음과 같이 수동으로 나누어 보겠습니다: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown 타깃은 클래스 인덱스(0, 1 그리고 2)이지만 소프트맥스 회귀 모델을 훈련시키기 위해 필요한 것은 타깃 클래스의 확률입니다. 각 샘플에서 확률이 1인 타깃 클래스를 제외한 다른 클래스의 확률은 0입니다(다른 말로하면 주어진 샘플에 대한 클래스 확률이 원-핫 벡터입니다). 클래스 인덱스를 원-핫 벡터로 바꾸는 간단한 함수를 작성하겠습니다: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown 10개 샘플만 넣어 이 함수를 테스트해 보죠: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown 잘 되네요, 이제 훈련 세트와 테스트 세트의 타깃 클래스 확률을 담은 행렬을 만들겠습니다: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown 이제 소프트맥스 함수를 만듭니다. 다음 공식을 참고하세요:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown 훈련을 위한 준비를 거의 마쳤습니다. 
입력과 출력의 개수를 정의합니다: ###Code n_inputs = X_train.shape[1] # == 3 (특성 2개와 편향) n_outputs = len(np.unique(y_train)) # == 3 (3개의 붓꽃 클래스) ###Output _____no_output_____ ###Markdown 이제 좀 복잡한 훈련 파트입니다! 이론적으로는 간단합니다. 그냥 수학 공식을 파이썬 코드로 바꾸기만 하면 됩니다. 하지만 실제로는 꽤 까다로운 면이 있습니다. 특히, 항이나 인덱스의 순서가 뒤섞이기 쉽습니다. 제대로 작동할 것처럼 코드를 작성했더라도 실제 제대로 계산하지 못합니다. 확실하지 않을 때는 각 항의 크기를 기록하고 이에 상응하는 코드가 같은 크기를 만드는지 확인합니다. 각 항을 독립적으로 평가해서 출력해 보는 것도 좋습니다. 사실 사이킷런에 이미 잘 구현되어 있기 때문에 이렇게 할 필요는 없습니다. 하지만 직접 만들어 보면 어떻게 작동하는지 이해하는데 도움이 됩니다.구현할 공식은 비용함수입니다:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$그리고 그레이디언트 공식입니다:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$$\hat{p}_k^{(i)} = 0$이면 $\log\left(\hat{p}_k^{(i)}\right)$를 계산할 수 없습니다. `nan` 값을 피하기 위해 $\log\left(\hat{p}_k^{(i)}\right)$에 아주 작은 값 $\epsilon$을 추가하겠습니다. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.42786510939287936 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown 바로 이겁니다! 소프트맥스 모델을 훈련시켰습니다. 모델 파라미터를 확인해 보겠습니다: ###Code Theta ###Output _____no_output_____ ###Markdown 검증 세트에 대한 예측과 정확도를 확인해 보겠습니다: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 이 모델이 매우 잘 작동하는 것 같습니다. 연습을 위해서 $\ell_2$ 규제를 조금 추가해 보겠습니다. 다음 코드는 위와 거의 동일하지만 손실에 $\ell_2$ 페널티가 추가되었고 그래디언트에도 항이 추가되었습니다(`Theta`의 첫 번째 원소는 편향이므로 규제하지 않습니다). 학습률 `eta`도 증가시켜 보겠습니다. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.48886433374493027 5000 0.48884031207388184 ###Markdown 추가된 $\ell_2$ 페널티 때문에 이전보다 손실이 조금 커보이지만 더 잘 작동하는 모델이 되었을까요? 확인해 보죠: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 와우, 완벽한 정확도네요! 운이 좋은 검증 세트일지 모르지만 잘 된 것은 맞습니다. 이제 조기 종료를 추가해 보죠. 이렇게 하려면 매 반복에서 검증 세트에 대한 손실을 계산해서 오차가 증가하기 시작할 때 멈춰야 합니다. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # 규제 하이퍼파라미터 best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "조기 종료!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown 여전히 완벽하지만 더 빠릅니다. 이제 전체 데이터셋에 대한 모델의 예측을 그래프로 나타내 보겠습니다: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown 이제 테스트 세트에 대한 모델의 최종 정확도를 측정해 보겠습니다: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown Chapter 4 – Training Linear Models(训练模型)_This notebook contains all the sample code and solutions to the exercises in chapter 4._ SetupFirst, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown 线性回归 Linear regression using the Normal Equation(用标准方程进行线性回归) 标准方程:$\hat{\theta}=(X^T \cdot X)^{-1} \cdot X^T \cdot y$,其中 - $\hat{\theta}$是使成本(代价)函数最小的$\theta$- $y$是包含$y^{(1)}$到$y^{(m)}$的目标值向量 ###Code # 生成一些线性数据 import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best ###Output _____no_output_____ ###Markdown 我们实际用来生成数据的函数是$y=4+3x_0+\text{高斯噪声}$。期待的是$\theta_0=4,\theta_1=3$,得到的是$\theta_0=4.215,\theta_1=2.77$,非常接近,噪声的存在使其不可能完全还原为原本的函数。 ###Code X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() # Scikit-Learn的等效代码 from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. 
However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 梯度下降 Linear regression using batch gradient descent(批量梯度下降)成本(代价)函数的偏导数:$\displaystyle \frac{\partial}{\partial{\theta_j}}\text{MSE}(\theta)=\frac{2}{m}\sum_{i=1}^m(\theta^T \cdot x^{(i)} - y^{(i)})x_j^{(i)}$ 成本(代价)函数的梯度向量:$\displaystyle \nabla_{\theta} \text{MSE}(\theta)=\left( \begin{array}{c} \displaystyle \frac{\partial}{\partial{\theta_0}}\text{MSE}(\theta) \\ \displaystyle \frac{\partial}{\partial{\theta_1}}\text{MSE}(\theta) \\\vdots \\\displaystyle \frac{\partial}{\partial{\theta_n}}\text{MSE}(\theta)\end{array} \right)= \frac{2}{m} X^T \cdot (X \cdot \theta - y)$ 计算梯度下降步长:$\theta^{(\text{next step})} = \theta - \eta \nabla_{\theta} \text{MSE}(\theta)$ ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent(随机梯度下降)当成本(代价)函数非常不规则时,随机梯度下降其实可以帮助算法跳出局部最小值,所以相比批量梯度下降,它对找到全局最小值更有优势。 **模拟退火:**逐步降低学习率,开始的步长比较大(这有助于快速进展和逃离局部最小值),然后越来越小,让算法尽量靠近全局最小值。 ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters # 简单的学习计划 def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta ###Output _____no_output_____ ###Markdown 在Sickit-Learn里,用SGD执行线性回归可以使用SGDRegressor类,其默认优化的成本函数是平方误差。 ###Code from sklearn.linear_model import SGDRegressor sgd_reg = 
SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent(小批量梯度下降) ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression(多项式回归) ###Code import numpy as np import numpy.random as rnd np.random.seed(42) # 基于简单的二次方程产生一些非线性数据(添加随机噪声) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() # 将每个特征的平方(二次多项式)作为新特征加入训练集 from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] # 对扩展后的训练集匹配一个LinearRegression模型 lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() ###Output Saving figure quadratic_predictions_plot ###Markdown 模型预估是$\hat{y}=0.56x^2+0.93x+1.78$,而实际上原本的函数是$y=0.5x^2+1.0x+2+\text{高斯噪声}$ 学习曲线 ###Code from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) 
save_fig("high_degree_polynomials_plot") plt.show() ###Output Saving figure high_degree_polynomials_plot ###Markdown 300阶多项式回归模型严重地过度拟合了训练数据,而线性模型则是拟合不足,这个案例中泛化结果最好的是二次模型。 ###Code from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split # 绘制模型的学习曲线 def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure underfitting_learning_curves_plot ###Markdown 这条学习曲线是典型的模型拟合不足。两条曲线均到达高地,非常接近,而且相当高。 ###Code from sklearn.pipeline import Pipeline # 10阶多项式模型的学习曲线 polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown 这条学习曲线和前一条相比有两个非常重大的区别:- 训练数据的误差远低于线性回归模型- 两条曲线之间有一定差距。该模型在训练数据上的表现比验证集上要好很多,这正是过度拟合的标志。但是如果使用更大的训练集,这两条曲线会越来越近。 Regularized models(正则线性模型) 岭回归&emsp;&emsp;岭回归是线性回归的正则化版:在成本(代价)函数中添加一个等于$\alpha \sum_{i=1}^n \theta_i^2$的正则项。这使得学习中的算法不仅需要拟合数据,同时还要让模型权重保持最小。 &emsp;&emsp;超参数$\alpha$控制的是对模型进行正则化的程度。如果$\alpha=0$,则岭回归就是线性模型。如果$\alpha$非常大,那么所有的权重都将非常接近于零,结果是一条穿过数据平均值的水平线。 &emsp;&emsp;岭回归模型的成本函数:$\displaystyle J(\theta)=\text{MSE}(\theta)+\alpha \frac{1}{2}\sum_{i=1}^n \theta^2$ ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), # 在执行岭回归之前,必须对数据进行缩放,因为它对输入特征的大小非常敏感。大多数正则化模型都是如此 ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown 闭式解的岭回归:$\hat{\theta}=(X^T \cdot X + \alpha A)^{-1} \cdot X^T \cdot y$ 
###Code # 使用Sickit-Learn执行闭式解的岭回归,利用Cholesky的矩阵因式分解法 from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) # 使用随机梯度下降 sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) # 使用Ridge类的sag求解器 ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 套索回归&emsp;&emsp;套索回归叫做_最小绝对收缩和选择算子回归_,简称Lasso回归。与岭回归一张,也是向成本(代价)函数增加一个正则项,但正则项是权重向量的$l_1$范数。 &emsp;&emsp;**Lasso回归代价函数:**$\displaystyle J(\theta)=\text{MSE}(\theta) + \alpha \sum_{i=1}^n |\theta_i|$ &emsp;&emsp;Lasso回归的重要特点是它倾向于完全消除掉最不重要特征的权重(将它们置为零),换句话说,Lasso回归会自动执行特征选择并输出一个稀疏模型(即只有很少的特征有非零权重)。 ###Code from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() # 使用Lasso训练模型 from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 弹性网络&emsp;&emsp;弹性网络是岭回归与Lasso回归之间的中间地带,其正则化就是岭回归和Lasso回归的正则项混合,混合比例通过$r$来控制。当$r=0$时,弹性网络即等同于岭回归,而当$r=1$时,即相当于Lasso回归。 &emsp;&emsp;弹性网络代价函数:$\displaystyle J(\theta)=\text{MSE}(\theta)+r\alpha \sum_{i=1}^n |\theta_i| + \frac{1-r}{2} \alpha \sum_{i=1}^n \theta_i^2$ ###Code from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) ###Output _____no_output_____ ###Markdown 早期停止法 ###Code np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) # 90阶多项式回归模型 poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, 
warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression(逻辑回归)逻辑回归模型概率估算(向量化形式):$$\hat{p}=h_{\theta}(x)=\sigma(\theta^T \cdot x)$$逻辑函数:$$\displaystyle \sigma(t)=\frac{1}{1+\exp{(-t)}}$$ ###Code # 绘制sigmoid函数,输出为一个0到1之间的数字 t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() 
###Output Saving figure logistic_function_plot ###Markdown &emsp;&emsp;一旦逻辑回归模型估算出实例$x$属于正类的概率$\hat{p}=h_{\theta}(x)$,可以轻松做出预测$\hat{y}$ &emsp;&emsp;逻辑回归模型预测:$$\hat{y}=\left \{ \begin{array}{lc}0 & (\hat{p}<0.5) \\1 & (\hat{p} \geqslant 0.5)\end{array} \right.$$ &emsp;&emsp;注意,当$t<0$时,$\sigma(t)<0.5$,当$t \geqslant 0$时,$\sigma(t) \geqslant 0.5$,所以如果$\theta^T \cdot x$是正类,逻辑回归模型预测结果是1,如果是负类,则预测为0。 单个训练实例的代价函数:$$c(\theta)=\left\{ \begin{array}{lc} -\log(\hat{p}) & (y=1) \\-\log(1-\hat{p}) & (y=0)\end{array} \right.$$ 逻辑回归代价函数(log损失函数):$$J(\theta)=-\frac{1}{m}\sum_{i=1}^m \left[ y^{(i)}\log(\hat{p}^{(i)}) + (1-y^{(i)})\log(1-\hat{p}^{(i)}) \right]$$这是一个凸函数,可以通过梯度下降(或是其他任意优化算法)保证能够找出全局最小值。 Logistic代价函数的偏导数:$$\frac{\partial}{\partial \theta_j}J(\theta)=\frac{1}{m}\sum_{i=1}{m}(\sigma(\theta^T \cdot x^{(i)}) -y^{(i)})x_J^{(i)}$$ ###Code # 加载IRIS数据集 from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) # 仅基于花瓣宽度创建一个分类器来检测Virginica鸢尾花 X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 # 训练逻辑回归模型 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, 
color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() ###Output Saving figure logistic_regression_contour_plot ###Markdown Softmax 回归(多元逻辑回归)&emsp;&emsp;逻辑回归模型经过推广,可以直接支持多个类别,而不需要训练并组合多个二元分类器。 &emsp;&emsp;Softmax函数:$$\displaystyle \hat{p}_k=\sigma(s(x))_k=\frac{\exp(s_k(x))}{\displaystyle \sum_{j=1}^K \exp(s_j(x))}$$其中 - $K$是类别的数量- $s(x)$是实例$x$每个类别的分数的向量- $\sigma(s(x))_k$是给定的类别分数下,实例$x$属于类别$k$的概率Softmax回归分类器预测:$$\displaystyle \hat{y}=\mathop{\arg\max}_{k} \sigma(s(x))_k = \mathop{\arg\max}_{k} s_k(x) = \mathop{\arg\max}_{k}(\theta_k^T \cdot x)$$- $\arg\max$运算符返回的是使函数最大化所对应的变量的值。在这个等式里,它返回的是使估算概率$\sigma(s(x))_k$最大的$k$的值。 &emsp;&emsp;交叉熵代价函数:$$J(\Theta)=-\frac{1}{m}\sum_{i=1}^m \sum_{k=1}^K y_k^{(i)} \log(\hat{p}_k^{(i)})$$- 如果第$i$个实例的目标类别为$k$,则$y_k^{(i)}$等于1,否则为0 对于类别$k$的交叉熵梯度向量:$$\nabla_{\theta_k}J(\Theta)=\frac{1}{m} \sum_{i=1}^m (\hat{p}_k^{(i)} - y_k^{(i)})x^{(i)}$$ ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] # 使用Softmax回归,指定一个支持Softmax回归的求解器——lbfgs softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions(练习题解答) 1. to 11.See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(用Softmax回归进行批量梯度下降训练,并实施早期停止法)(without using Scikit-Learn) (不使用Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. 
So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. 
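For example, here is a quick, self-contained shape check for the gradient computation used below (dummy random data, my own addition): with $m$ instances, 3 inputs (the bias plus 2 features) and $K = 3$ classes, every term should line up as annotated. ###Code
import numpy as np

m_check, n_in, n_out = 120, 3, 3                       # m instances, 3 inputs, K = 3 classes
X_dummy = np.random.randn(m_check, n_in)               # (m, 3)
Theta_dummy = np.random.randn(n_in, n_out)             # (3, K)
logits_dummy = X_dummy.dot(Theta_dummy)                # (m, K)
proba_dummy = np.exp(logits_dummy) / np.exp(logits_dummy).sum(axis=1, keepdims=True)  # (m, K)
one_hot_dummy = np.eye(n_out)[np.random.randint(n_out, size=m_check)]                 # (m, K)
error_dummy = proba_dummy - one_hot_dummy              # (m, K)
grad_dummy = 1 / m_check * X_dummy.T.dot(error_dummy)  # (3, m) @ (m, K) -> (3, K)
assert grad_dummy.shape == Theta_dummy.shape
###Output
_____no_output_____
###Markdown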
The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.4106007142918712 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629506 1000 0.503640075014894 1500 0.4946891059460321 2000 0.4912968418075477 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.489035124439786 4000 0.4889173621830817 4500 0.4888643337449302 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? 
Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make 
this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. 
However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) 
theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = 
Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', 
xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure 
lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = 
np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
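As a quick aside (an addition here, not part of the original solution), the same conversion can be written as a NumPy one-liner, assuming the targets are integer class indices starting at 0; the more explicit helper defined just below is what we will actually use: ###Code
# One-liner alternative to the to_one_hot() helper defined below: row i of the
# identity matrix is the one-hot vector for class i, so fancy indexing does the job.
np.eye(y_train.max() + 1)[y_train[:10]]
###Output _____no_output_____ ###Markdown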
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
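One caveat worth flagging (an addition here, not from the book): `softmax()` as written exponentiates the raw logits, which can overflow for very large scores. A numerically stabler variant, sketched below, subtracts the per-row maximum first; the result is unchanged because the shift cancels in the ratio: ###Code
def stable_softmax(logits):
    # Shifting by the row-wise max does not change the softmax output,
    # but it keeps np.exp() from overflowing on large logits.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

# Sanity check on the logits of the model we just trained: both versions agree.
np.allclose(stable_softmax(X_train.dot(Theta)), softmax(X_train.dot(Theta)))
###Output _____no_output_____ ###Markdown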
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
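One refinement the early-stopping loop above does not include (a suggestion, not part of the original solution): keep a copy of the parameters that achieved the lowest validation loss and restore them at the end, optionally with a patience counter so that a single noisy increase does not stop training. A sketch under those assumptions, using separate variable names so it does not overwrite the `Theta` that gets plotted below: ###Code
best_loss_es = np.infty
best_Theta_es = None
patience, since_best = 10, 0                  # hypothetical patience setting
Theta_es = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # gradient step on the training set (identical to the loop above)
    error = softmax(X_train.dot(Theta_es)) - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_es[1:]]
    Theta_es = Theta_es - eta * gradients

    # regularized loss on the validation set
    val_proba = softmax(X_valid.dot(Theta_es))
    val_loss = (-np.mean(np.sum(Y_valid_one_hot * np.log(val_proba + epsilon), axis=1))
                + alpha * 1/2 * np.sum(np.square(Theta_es[1:])))
    if val_loss < best_loss_es:
        best_loss_es, best_Theta_es, since_best = val_loss, Theta_es.copy(), 0
    else:
        since_best += 1
        if since_best >= patience:
            break                             # stop, but keep the best parameters

Theta_es = best_Theta_es                      # restore the best parameters found
###Output _____no_output_____ ###Markdown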
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector).
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set, the validation set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained.
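As a small illustration (an addition here, not in the original), the trained `Theta` can classify a single new flower directly; the bias term has to be prepended by hand, e.g. for a hypothetical flower with a 5 cm petal length and a 2 cm petal width: ###Code
x_example = np.array([[1., 5., 2.]])       # [bias, petal length, petal width]
example_proba = softmax(x_example.dot(Theta))
example_proba, example_proba.argmax(axis=1)
###Output _____no_output_____ ###Markdown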
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
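As a rough cross-check (again an addition, and only approximate since the regularization conventions do not match exactly), Scikit-Learn's own softmax regression can be fit on the same split; the manual bias column is dropped because `LogisticRegression` adds its own intercept: ###Code
from sklearn.linear_model import LogisticRegression

sk_softmax = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                C=10, random_state=42)
sk_softmax.fit(X_train[:, 1:], y_train)      # drop the hand-added bias column
sk_softmax.score(X_valid[:, 1:], y_valid)    # validation accuracy
###Output _____no_output_____ ###Markdown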
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
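To make the note above a bit more concrete, here is a small sketch (an addition, not from the book) that builds the pseudoinverse from the SVD of $\mathbf{X}$ and checks it against `np.linalg.pinv()` and the Normal Equation solution: ###Code
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
# A robust implementation would zero out near-zero singular values rather than
# invert them; X_b is well conditioned here, so a plain reciprocal is fine.
X_b_pinv = Vt.T.dot(np.diag(1 / s)).dot(U.T)     # X^+ = V Sigma^+ U^T
np.allclose(X_b_pinv, np.linalg.pinv(X_b)), np.allclose(X_b_pinv.dot(y), theta_best)
###Output _____no_output_____ ###Markdown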
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
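# the validation RMSE (solid blue) bottoms out at best_epoch and then creeps back up as the model starts to overfit; the dotted horizontal line marks that minimum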
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") 
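# guide lines at y = 0, 0.5 and 1 plus the vertical axis; the sigmoid plotted next crosses 0.5 at t = 0 and saturates towards 0 and 1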
plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) 
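# zz holds the predicted class index at each grid point, zz1 the estimated probability of class 1 (Iris versicolor) used for the probability contours below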
plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
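To spell out what the code below computes: the penalized cost is $J(\mathbf{\Theta}) = -\dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{j \geq 1}\sum\limits_{k=1}^{K}\theta_{j,k}^2$ (the penalty sum skips row $j=0$, i.e. the bias terms), and since the derivative of $\dfrac{\alpha}{2}\theta^2$ with respect to $\theta$ is $\alpha\theta$, each gradient column simply gains an extra $\alpha \, \mathbf{\theta}^{(k)}$ term whose bias component is set to zero.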
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
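As an aside, a common refinement of this idea (not used in the code below) is to tolerate a few non-improving validation checks before giving up, and to keep a copy of the best parameters seen so far. Here is a minimal sketch of that pattern, reusing the variables defined in the cells above; the `patience` value is an arbitrary choice made for this illustration.
###Code # Illustrative variant only (not the notebook's implementation): stop after `patience`
# consecutive validation checks without improvement, and remember the best Theta seen.
patience = 5                     # arbitrary tolerance, chosen for this sketch
best_loss = np.infty
best_Theta = None
checks_without_improvement = 0
Theta_sketch = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # one regularized gradient step, exactly as in the cells above
    Y_proba = softmax(X_train.dot(Theta_sketch))
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta_sketch[1:]]
    Theta_sketch = Theta_sketch - eta * gradients

    # penalized validation loss, computed the same way as before
    Y_proba_valid = softmax(X_valid.dot(Theta_sketch))
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba_valid + epsilon), axis=1))
    l2_loss = 1/2 * np.sum(np.square(Theta_sketch[1:]))
    loss = xentropy_loss + alpha * l2_loss

    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta_sketch.copy()      # snapshot of the best parameters so far
        checks_without_improvement = 0
    else:
        checks_without_improvement += 1
        if checks_without_improvement >= patience:
            print(iteration, loss, "early stopping (patience exhausted)")
            break
###Output _____no_output_____ ###Markdown The version used below is simpler and stops at the very first increase in validation loss: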
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) if iteration % 500 == 0: xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss print(iteration, loss) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." 
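# save_fig() below will write each figure to ./images/training_linear_models/<fig_id>.png at 300 dpi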
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: 
# not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 10, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression 
= Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 
1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(n_iter=1, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train_predict, y_train)) val_errors.append(mean_squared_error(y_val_predict, y_val)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val_predict, y_val) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = 
theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) for subplot in (221, 223): plt.subplot(subplot) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) for subplot in (223, 224): plt.subplot(subplot) plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", 
fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. 
###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. 
When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output _____no_output_____ ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output _____no_output_____ ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. 
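One detail worth noting about the loop below: it breaks only after the validation loss has already gone back up, so the final $\mathbf{\Theta}$ is one gradient step past the best iteration; if you need the very best parameters, you can also keep a copy of $\mathbf{\Theta}$ whenever the validation loss improves.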
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Run in Google Colab Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions_plot") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
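(For reference, this pseudoinverse can be computed from the Singular Value Decomposition $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$ as $\mathbf{X}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^T$, where $\mathbf{\Sigma}^+$ is obtained by transposing $\mathbf{\Sigma}$ and inverting its non-negligible singular values, leaving the rest at zero.)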
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[1, 1], [1, -1], [1, 0.5]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.05, n_iterations = 200): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8)) for i, N, l1, l2, title in ((0, N1, 2., 0, "Lasso"), (1, N2, 0, 2., "Ridge")): JR = J + l1 * N1 + l2 * 0.5 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(np.array([[2.0], [0.5]]), Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) ax = axes[i, 0] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, N / 2., levels=levelsN) ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 0, "ys") ax.plot(t1_min, t2_min, "ys") ax.set_title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) ax.set_ylabel(r"$\theta_2$", fontsize=16, rotation=0) ax = axes[i, 1] ax.grid(True) ax.axhline(y=0, color='k') ax.axvline(x=0, color='k') ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) ax.plot(path_JR[:, 0], path_JR[:, 1], "w-o") ax.plot(path_N[:, 0], path_N[:, 1], "y--") ax.plot(0, 
0, "ys") ax.plot(t1_min, t2_min, "ys") ax.plot(t1r_min, t2r_min, "rs") ax.set_title(title, fontsize=16) ax.axis([t1a, t1b, t2a, t2b]) if i == 1: ax.set_xlabel(r"$\theta_1$", fontsize=16) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris virginica", 
fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693313 2000 0.5444496861981873 2500 0.5038530181431525 3000 0.4729228972192248 3500 0.4482424418895776 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
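As a quick sanity check (an addition for illustration, not part of the original notebook), each row of predicted class probabilities should sum to 1, and the training-set accuracy should already be decent: ###Code
# Sanity checks on the freshly trained softmax model (illustrative additions):
Y_proba_train = softmax(X_train.dot(Theta))
Y_proba_train.sum(axis=1)[:5]  # each row of class probabilities should sum to ~1.0
np.mean(np.argmax(Y_proba_train, axis=1) == y_train)  # training-set accuracy
###Output _____no_output_____ ###Markdown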
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.503640075014894 1500 0.49468910594603216 2000 0.4912968418075477 2500 0.489899247009333 3000 0.48929905984511984 3500 0.48903512443978603 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ###Code # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." 
CHAPTER_ID = "training_linear_models" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown **Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. 
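To make the SVD connection concrete, here is a minimal sketch (an addition, not from the book) that rebuilds the pseudoinverse from `np.linalg.svd()` and recovers the same parameters as `np.linalg.pinv(X_b).dot(y)`: ###Code
# Minimal sketch: X_b = U.dot(np.diag(s)).dot(Vt), and its Moore-Penrose pseudoinverse
# is Vt.T.dot(np.diag(s_inv)).dot(U.T), where tiny singular values are zeroed out
# instead of inverted.
U, s, Vt = np.linalg.svd(X_b, full_matrices=False)
s_inv = np.array([1 / si if si > 1e-10 else 0.0 for si in s])
X_b_pinv = Vt.T.dot(np.diag(s_inv)).dot(U.T)
X_b_pinv.dot(y)  # should match theta_best and np.linalg.pinv(X_b).dot(y)
###Output _____no_output_____ ###Markdown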
Linear regression using batch gradient descent ###Code eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") 
plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code from sklearn.linear_model 
import Ridge np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()), ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, penalty=None, eta0=0.0005, warm_start=True, learning_rate="constant", random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") 
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() from sklearn.base import clone sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = clone(sgd_reg) best_epoch, best_model t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + 
e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="liblinear", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", 
label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. 
Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. 
###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____ ###Markdown **Chapter 4 – Training Linear Models** _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. ###Code # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "training_linear_models" IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID) os.makedirs(IMAGES_PATH, exist_ok=True) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGES_PATH, fig_id + "." 
+ fig_extension) print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) # Ignore useless warnings (see SciPy issue #5998) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") ###Output _____no_output_____ ###Markdown Linear regression using the Normal Equation ###Code import numpy as np X = 2 * np.random.rand(100, 1) y = 4 + 3 * X + np.random.randn(100, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([0, 2, 0, 15]) save_fig("generated_data_plot") plt.show() X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0], [2]]) X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance y_predict = X_new_b.dot(theta_best) y_predict plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() ###Output _____no_output_____ ###Markdown The figure in the book actually corresponds to the following code, with a legend and axis labels: ###Code plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 2, 0, 15]) save_fig("linear_model_predictions") plt.show() from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ lin_reg.predict(X_new) ###Output _____no_output_____ ###Markdown The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly: ###Code theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6) theta_best_svd ###Output _____no_output_____ ###Markdown This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). 
You can use `np.linalg.pinv()` to compute the pseudoinverse directly: ###Code np.linalg.pinv(X_b).dot(y) ###Output _____no_output_____ ###Markdown Linear regression using batch gradient descent ###Code eta = 0.1 # learning rate n_iterations = 1000 m = 100 theta = np.random.randn(2,1) # random initialization for iteration in range(n_iterations): gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients theta X_new_b.dot(theta) theta_path_bgd = [] def plot_gradient_descent(theta, eta, theta_path=None): m = len(X_b) plt.plot(X, y, "b.") n_iterations = 1000 for iteration in range(n_iterations): if iteration < 10: y_predict = X_new_b.dot(theta) style = "b-" if iteration > 0 else "r--" plt.plot(X_new, y_predict, style) gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) theta = theta - eta * gradients if theta_path is not None: theta_path.append(theta) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 2, 0, 15]) plt.title(r"$\eta = {}$".format(eta), fontsize=16) np.random.seed(42) theta = np.random.randn(2,1) # random initialization plt.figure(figsize=(10,4)) plt.subplot(131); plot_gradient_descent(theta, eta=0.02) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd) plt.subplot(133); plot_gradient_descent(theta, eta=0.5) save_fig("gradient_descent_plot") plt.show() ###Output Saving figure gradient_descent_plot ###Markdown Stochastic Gradient Descent ###Code theta_path_sgd = [] m = len(X_b) np.random.seed(42) n_epochs = 50 t0, t1 = 5, 50 # learning schedule hyperparameters def learning_schedule(t): return t0 / (t + t1) theta = np.random.randn(2,1) # random initialization for epoch in range(n_epochs): for i in range(m): if epoch == 0 and i < 20: # not shown in the book y_predict = X_new_b.dot(theta) # not shown style = "b-" if i > 0 else "r--" # not shown plt.plot(X_new, y_predict, style) # not shown random_index = np.random.randint(m) xi = X_b[random_index:random_index+1] yi = y[random_index:random_index+1] gradients = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(epoch * m + i) theta = theta - eta * gradients theta_path_sgd.append(theta) # not shown plt.plot(X, y, "b.") # not shown plt.xlabel("$x_1$", fontsize=18) # not shown plt.ylabel("$y$", rotation=0, fontsize=18) # not shown plt.axis([0, 2, 0, 15]) # not shown save_fig("sgd_plot") # not shown plt.show() # not shown theta from sklearn.linear_model import SGDRegressor sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ ###Output _____no_output_____ ###Markdown Mini-batch gradient descent ###Code theta_path_mgd = [] n_iterations = 50 minibatch_size = 20 np.random.seed(42) theta = np.random.randn(2,1) # random initialization t0, t1 = 200, 1000 def learning_schedule(t): return t0 / (t + t1) t = 0 for epoch in range(n_iterations): shuffled_indices = np.random.permutation(m) X_b_shuffled = X_b[shuffled_indices] y_shuffled = y[shuffled_indices] for i in range(0, m, minibatch_size): t += 1 xi = X_b_shuffled[i:i+minibatch_size] yi = y_shuffled[i:i+minibatch_size] gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(t) theta = theta - eta * gradients theta_path_mgd.append(theta) theta theta_path_bgd = np.array(theta_path_bgd) theta_path_sgd = np.array(theta_path_sgd) theta_path_mgd = np.array(theta_path_mgd) plt.figure(figsize=(7,4)) plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic") 
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch") plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch") plt.legend(loc="upper left", fontsize=16) plt.xlabel(r"$\theta_0$", fontsize=20) plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0) plt.axis([2.5, 4.5, 2.3, 3.9]) save_fig("gradient_descent_paths_plot") plt.show() ###Output Saving figure gradient_descent_paths_plot ###Markdown Polynomial regression ###Code import numpy as np import numpy.random as rnd np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show() from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0] X_poly[0] lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_ X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show() from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show() from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) # not shown in the book plt.xlabel("Training set size", fontsize=14) # not shown plt.ylabel("RMSE", fontsize=14) # not shown lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) # not shown in the book save_fig("underfitting_learning_curves_plot") # not shown plt.show() # not shown from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) # not shown 
save_fig("learning_curves_plot") # not shown plt.show() # not shown ###Output Saving figure learning_curves_plot ###Markdown Regularized models ###Code np.random.seed(42) m = 20 X = 3 * np.random.rand(m, 1) y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5 X_new = np.linspace(0, 3, 100).reshape(100, 1) from sklearn.linear_model import Ridge ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) ridge_reg = Ridge(alpha=1, solver="sag", random_state=42) ridge_reg.fit(X, y) ridge_reg.predict([[1.5]]) from sklearn.linear_model import Ridge def plot_model(model_class, polynomial, alphas, **model_kargs): for alpha, style in zip(alphas, ("b-", "g--", "r:")): model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression() if polynomial: model = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("std_scaler", StandardScaler()), ("regul_reg", model), ]) model.fit(X, y) y_new_regul = model.predict(X_new) lw = 2 if alpha > 0 else 1 plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha)) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left", fontsize=15) plt.xlabel("$x_1$", fontsize=18) plt.axis([0, 3, 0, 4]) plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42) save_fig("ridge_regression_plot") plt.show() ###Output Saving figure ridge_regression_plot ###Markdown **Note**: to be future-proof, we set `max_iter=1000` and `tol=1e-3` because these will be the default values in Scikit-Learn 0.21. ###Code sgd_reg = SGDRegressor(penalty="l2", max_iter=1000, tol=1e-3, random_state=42) sgd_reg.fit(X, y.ravel()) sgd_reg.predict([[1.5]]) from sklearn.linear_model import Lasso plt.figure(figsize=(8,4)) plt.subplot(121) plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42) plt.ylabel("$y$", rotation=0, fontsize=18) plt.subplot(122) plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), random_state=42) save_fig("lasso_regression_plot") plt.show() from sklearn.linear_model import Lasso lasso_reg = Lasso(alpha=0.1) lasso_reg.fit(X, y) lasso_reg.predict([[1.5]]) from sklearn.linear_model import ElasticNet elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42) elastic_net.fit(X, y) elastic_net.predict([[1.5]]) np.random.seed(42) m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1) X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10) ###Output _____no_output_____ ###Markdown Early stopping example: ###Code from sklearn.base import clone poly_scaler = Pipeline([ ("poly_features", PolynomialFeatures(degree=90, include_bias=False)), ("std_scaler", StandardScaler()) ]) X_train_poly_scaled = poly_scaler.fit_transform(X_train) X_val_poly_scaled = poly_scaler.transform(X_val) sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) minimum_val_error = float("inf") best_epoch = None best_model = None for epoch in range(1000): sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off y_val_predict = sgd_reg.predict(X_val_poly_scaled) val_error = mean_squared_error(y_val, y_val_predict) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = epoch best_model = 
clone(sgd_reg) ###Output _____no_output_____ ###Markdown Create the graph: ###Code sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None, learning_rate="constant", eta0=0.0005, random_state=42) n_epochs = 500 train_errors, val_errors = [], [] for epoch in range(n_epochs): sgd_reg.fit(X_train_poly_scaled, y_train) y_train_predict = sgd_reg.predict(X_train_poly_scaled) y_val_predict = sgd_reg.predict(X_val_poly_scaled) train_errors.append(mean_squared_error(y_train, y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) best_epoch = np.argmin(val_errors) best_val_rmse = np.sqrt(val_errors[best_epoch]) plt.annotate('Best model', xy=(best_epoch, best_val_rmse), xytext=(best_epoch, best_val_rmse + 1), ha="center", arrowprops=dict(facecolor='black', shrink=0.05), fontsize=16, ) best_val_rmse -= 0.03 # just to make the graph look better plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2) plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set") plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Epoch", fontsize=14) plt.ylabel("RMSE", fontsize=14) save_fig("early_stopping_plot") plt.show() best_epoch, best_model %matplotlib inline import matplotlib.pyplot as plt import numpy as np t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5 # ignoring bias term t1s = np.linspace(t1a, t1b, 500) t2s = np.linspace(t2a, t2b, 500) t1, t2 = np.meshgrid(t1s, t2s) T = np.c_[t1.ravel(), t2.ravel()] Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]]) yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:] J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape) N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape) N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape) t_min_idx = np.unravel_index(np.argmin(J), J.shape) t1_min, t2_min = t1[t_min_idx], t2[t_min_idx] t_init = np.array([[0.25], [-1]]) def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50): path = [theta] for iteration in range(n_iterations): gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta theta = theta - eta * gradients path.append(theta) return np.array(path) plt.figure(figsize=(12, 8)) for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")): JR = J + l1 * N1 + l2 * N2**2 tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape) t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx] levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J) levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR) levelsN=np.linspace(0, np.max(N), 10) path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0) path_JR = bgd_path(t_init, Xr, yr, l1, l2) path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0) plt.subplot(221 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9) plt.contour(t1, t2, N, levels=levelsN) plt.plot(path_J[:, 0], path_J[:, 1], "w-o") plt.plot(path_N[:, 0], path_N[:, 1], "y-^") plt.plot(t1_min, t2_min, "rs") plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0) plt.subplot(222 + i * 2) plt.grid(True) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9) plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o") 
plt.plot(t1r_min, t2r_min, "rs") plt.title(title, fontsize=16) plt.axis([t1a, t1b, t2a, t2b]) if i == 1: plt.xlabel(r"$\theta_1$", fontsize=20) save_fig("lasso_vs_ridge_plot") plt.show() ###Output Saving figure lasso_vs_ridge_plot ###Markdown Logistic regression ###Code t = np.linspace(-10, 10, 100) sig = 1 / (1 + np.exp(-t)) plt.figure(figsize=(9, 3)) plt.plot([-10, 10], [0, 0], "k-") plt.plot([-10, 10], [0.5, 0.5], "k:") plt.plot([-10, 10], [1, 1], "k:") plt.plot([0, 0], [-1.1, 1.1], "k-") plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$") plt.xlabel("t") plt.legend(loc="upper left", fontsize=20) plt.axis([-10, 10, -0.1, 1.1]) save_fig("logistic_function_plot") plt.show() from sklearn import datasets iris = datasets.load_iris() list(iris.keys()) print(iris.DESCR) X = iris["data"][:, 3:] # petal width y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0 ###Output _____no_output_____ ###Markdown **Note**: To be future-proof we set `solver="lbfgs"` since this will be the default value in Scikit-Learn 0.22. ###Code from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", random_state=42) log_reg.fit(X, y) X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") ###Output _____no_output_____ ###Markdown The figure in the book actually is actually a bit fancier: ###Code X_new = np.linspace(0, 3, 1000).reshape(-1, 1) y_proba = log_reg.predict_proba(X_new) decision_boundary = X_new[y_proba[:, 1] >= 0.5][0] plt.figure(figsize=(8, 3)) plt.plot(X[y==0], y[y==0], "bs") plt.plot(X[y==1], y[y==1], "g^") plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2) plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica") plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica") plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center") plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b') plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g') plt.xlabel("Petal width (cm)", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 3, -0.02, 1.02]) save_fig("logistic_regression_plot") plt.show() decision_boundary log_reg.predict([[1.7], [1.5]]) from sklearn.linear_model import LogisticRegression X = iris["data"][:, (2, 3)] # petal length, petal width y = (iris["target"] == 2).astype(np.int) log_reg = LogisticRegression(solver="lbfgs", C=10**10, random_state=42) log_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(2.9, 7, 500).reshape(-1, 1), np.linspace(0.8, 2.7, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = log_reg.predict_proba(X_new) plt.figure(figsize=(10, 4)) plt.plot(X[y==0, 0], X[y==0, 1], "bs") plt.plot(X[y==1, 0], X[y==1, 1], "g^") zz = y_proba[:, 1].reshape(x0.shape) contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg) left_right = np.array([2.9, 7]) boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1] plt.clabel(contour, inline=1, fontsize=12) plt.plot(left_right, boundary, "k--", linewidth=3) plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center") plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center") 
plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.axis([2.9, 7, 0.8, 2.7]) save_fig("logistic_regression_contour_plot") plt.show() X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42) softmax_reg.fit(X, y) x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] y_proba = softmax_reg.predict_proba(X_new) y_predict = softmax_reg.predict(X_new) zz1 = y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="center left", fontsize=14) plt.axis([0, 7, 0, 3.5]) save_fig("softmax_regression_contour_plot") plt.show() softmax_reg.predict([[5, 2]]) softmax_reg.predict_proba([[5, 2]]) ###Output _____no_output_____ ###Markdown Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression(without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier. ###Code X = iris["data"][:, (2, 3)] # petal length, petal width y = iris["target"] ###Output _____no_output_____ ###Markdown We need to add the bias term for every instance ($x_0 = 1$): ###Code X_with_bias = np.c_[np.ones([len(X), 1]), X] ###Output _____no_output_____ ###Markdown And let's set the random seed so the output of this exercise solution is reproducible: ###Code np.random.seed(2042) ###Output _____no_output_____ ###Markdown The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation: ###Code test_ratio = 0.2 validation_ratio = 0.2 total_size = len(X_with_bias) test_size = int(total_size * test_ratio) validation_size = int(total_size * validation_ratio) train_size = total_size - test_size - validation_size rnd_indices = np.random.permutation(total_size) X_train = X_with_bias[rnd_indices[:train_size]] y_train = y[rnd_indices[:train_size]] X_valid = X_with_bias[rnd_indices[train_size:-test_size]] y_valid = y[rnd_indices[train_size:-test_size]] X_test = X_with_bias[rnd_indices[-test_size:]] y_test = y[rnd_indices[-test_size:]] ###Output _____no_output_____ ###Markdown The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). 
Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance: ###Code def to_one_hot(y): n_classes = y.max() + 1 m = len(y) Y_one_hot = np.zeros((m, n_classes)) Y_one_hot[np.arange(m), y] = 1 return Y_one_hot ###Output _____no_output_____ ###Markdown Let's test this function on the first 10 instances: ###Code y_train[:10] to_one_hot(y_train[:10]) ###Output _____no_output_____ ###Markdown Looks good, so let's create the target class probabilities matrix for the training set and the test set: ###Code Y_train_one_hot = to_one_hot(y_train) Y_valid_one_hot = to_one_hot(y_valid) Y_test_one_hot = to_one_hot(y_test) ###Output _____no_output_____ ###Markdown Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$ ###Code def softmax(logits): exps = np.exp(logits) exp_sums = np.sum(exps, axis=1, keepdims=True) return exps / exp_sums ###Output _____no_output_____ ###Markdown We are almost ready to start training. Let's define the number of inputs and outputs: ###Code n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term) n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes) ###Output _____no_output_____ ###Markdown Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.So the equations we will need are the cost function:$J(\mathbf{\Theta}) =- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$And the equation for the gradients:$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting `nan` values. ###Code eta = 0.01 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) Theta = Theta - eta * gradients ###Output 0 5.446205811872683 500 0.8350062641405651 1000 0.6878801447192402 1500 0.6012379137693314 2000 0.5444496861981872 2500 0.5038530181431525 3000 0.47292289721922487 3500 0.44824244188957774 4000 0.4278651093928793 4500 0.41060071429187134 5000 0.3956780375390374 ###Markdown And that's it! The Softmax model is trained. 
Let's look at the model parameters: ###Code Theta ###Output _____no_output_____ ###Markdown Let's make predictions for the validation set and check the accuracy score: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot if iteration % 500 == 0: print(iteration, loss) gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients ###Output 0 6.629842469083912 500 0.5339667976629505 1000 0.5036400750148942 1500 0.49468910594603216 2000 0.4912968418075476 2500 0.48989924700933296 3000 0.4892990598451198 3500 0.4890351244397859 4000 0.4889173621830818 4500 0.4888643337449303 5000 0.4888403120738818 ###Markdown Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out: ###Code logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing. ###Code eta = 0.1 n_iterations = 5001 m = len(X_train) epsilon = 1e-7 alpha = 0.1 # regularization hyperparameter best_loss = np.infty Theta = np.random.randn(n_inputs, n_outputs) for iteration in range(n_iterations): logits = X_train.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss error = Y_proba - Y_train_one_hot gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]] Theta = Theta - eta * gradients logits = X_valid.dot(Theta) Y_proba = softmax(logits) xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1)) l2_loss = 1/2 * np.sum(np.square(Theta[1:])) loss = xentropy_loss + alpha * l2_loss if iteration % 500 == 0: print(iteration, loss) if loss < best_loss: best_loss = loss else: print(iteration - 1, best_loss) print(iteration, loss, "early stopping!") break logits = X_valid.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_valid) accuracy_score ###Output _____no_output_____ ###Markdown Still perfect, but faster. 
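One caveat worth flagging: the loop above halts at the very first iteration in which the validation loss ticks up, which can be premature when the loss curve is noisy. A common variant (an addition here, not part of the original exercise) tolerates a small "patience" window of non-improving iterations and then rolls back to the best parameters seen. A minimal sketch, reusing `X_train`, `X_valid`, the one-hot targets, `softmax()`, `n_inputs` and `n_outputs` defined above; the `patience` value is only illustrative: ###Code
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1        # regularization hyperparameter
patience = 20      # illustrative: extra iterations tolerated without improvement

best_loss = np.infty
best_Theta = None
no_improvement = 0

Theta = np.random.randn(n_inputs, n_outputs)

for iteration in range(n_iterations):
    # gradient step on the training set (same update as above)
    logits = X_train.dot(Theta)
    Y_proba = softmax(logits)
    error = Y_proba - Y_train_one_hot
    gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
    Theta = Theta - eta * gradients

    # regularized loss on the validation set
    logits = X_valid.dot(Theta)
    Y_proba = softmax(logits)
    xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
    loss = xentropy_loss + alpha * 1/2 * np.sum(np.square(Theta[1:]))

    if loss < best_loss:
        best_loss = loss
        best_Theta = Theta.copy()   # remember the best parameters seen so far
        no_improvement = 0
    else:
        no_improvement += 1
        if no_improvement >= patience:
            print(iteration, loss, "early stopping (patience exhausted)")
            break

Theta = best_Theta   # roll back to the best model
###Output _____no_output_____ ###Markdown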
Now let's plot the model's predictions on the whole dataset: ###Code x0, x1 = np.meshgrid( np.linspace(0, 8, 500).reshape(-1, 1), np.linspace(0, 3.5, 200).reshape(-1, 1), ) X_new = np.c_[x0.ravel(), x1.ravel()] X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new] logits = X_new_with_bias.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) zz1 = Y_proba[:, 1].reshape(x0.shape) zz = y_predict.reshape(x0.shape) plt.figure(figsize=(10, 4)) plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica") plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor") plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa") from matplotlib.colors import ListedColormap custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x0, x1, zz, cmap=custom_cmap) contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg) plt.clabel(contour, inline=1, fontsize=12) plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) plt.legend(loc="upper left", fontsize=14) plt.axis([0, 7, 0, 3.5]) plt.show() ###Output _____no_output_____ ###Markdown And now let's measure the final model's accuracy on the test set: ###Code logits = X_test.dot(Theta) Y_proba = softmax(logits) y_predict = np.argmax(Y_proba, axis=1) accuracy_score = np.mean(y_predict == y_test) accuracy_score ###Output _____no_output_____
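###Markdown As an optional sanity check (an addition, not part of the original solution), we could compare the hand-rolled Softmax model with Scikit-Learn's own softmax regression on the same split, using the same `C=10` setting as earlier in the chapter. The regularization is not exactly equivalent to our `alpha`, so the accuracies should only be roughly comparable: ###Code
from sklearn.linear_model import LogisticRegression

# X_train/X_test include the manually added bias column; LogisticRegression fits
# its own intercept, so that column is dropped here.
sk_softmax = LogisticRegression(multi_class="multinomial", solver="lbfgs",
                                C=10, random_state=42)
sk_softmax.fit(X_train[:, 1:], y_train)
np.mean(sk_softmax.predict(X_test[:, 1:]) == y_test)
###Output _____no_output_____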
jupyter_notebooks/2.4_salary_b_cleaning.ipynb
###Markdown Batter Salary Data Cleaning---*By Ihza Gonzales*This notebook aims to clean the historical data of stats and salaries acquired from SeanLahmen.com. The data for salaries is from 1985-2016. Import Libraries--- ###Code import numpy as np import pandas as pd pd.set_option('display.max_columns', None) ###Output _____no_output_____ ###Markdown Import Historical Salary Dataset--- ###Code salary = pd.read_csv('../data/lahman_database/Salaries.csv') salary.head() ###Output _____no_output_____ ###Markdown Import Historical Batting Stats--- ###Code bat = pd.read_csv('../data/lahman_database/Batting.csv') bat.head() ###Output _____no_output_____ ###Markdown Only get stats after 1985 *This is because salary dataset starts at 1985* ###Code bat = bat[bat['yearID']>1985] bat = bat[bat['AB']> 50] bat.head() ###Output _____no_output_____ ###Markdown Create new column for batting average ###Code bat['AVG'] = round((bat['H'] / bat['AB']), 3) bat.head() ###Output _____no_output_____ ###Markdown Create new column for on-base percentage ###Code numerator = bat['H'] + bat['BB'] + bat['HBP'] plate = bat['AB'] + bat['BB'] + bat['HBP'] + bat['SF'] bat['OBP'] = round((numerator / plate), 3) bat.head() ###Output _____no_output_____ ###Markdown Create new column for slugging percentage ###Code first = bat['H'] - (bat['2B'] + bat['3B'] + bat['HR']) bat['SLG'] = round(((first + (2 * bat['2B']) + (3 * bat['3B']) + (4 * bat['HR'])) / bat['AB']), 3) bat.head() ###Output _____no_output_____ ###Markdown Create new column for on-base plus slugging percentage ###Code bat['OPS'] = bat['OBP'] + bat['SLG'] bat.head() ###Output _____no_output_____ ###Markdown Drop unwanted columns ###Code bat.drop(columns = ['GIDP', 'IBB', 'stint', 'SB', 'HBP', 'SH', 'SF', 'CS'], inplace = True) bat.head() ###Output _____no_output_____ ###Markdown Merge salary and batting dataset ###Code df = bat.merge(salary, how = 'inner', left_on = ['playerID', 'yearID'], right_on = ['playerID', 'yearID']) df.head() ###Output _____no_output_____ ###Markdown Drop repeated columns from merge ###Code df.drop(columns = ['teamID_y', 'lgID_y'], inplace = True) df.head() df.isnull().sum() ###Output _____no_output_____ ###Markdown Save clean dataset ###Code df.to_csv('../data/past_salaries_bat.csv') ###Output _____no_output_____
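###Markdown A few optional sanity checks on the derived rate stats can catch formula slips before the saved file is used downstream. This is an addition (not part of the original cleaning flow) and relies only on the columns created above: ###Code
# Optional consistency checks on the derived rate stats (not in the original notebook).
complete = df.dropna(subset=['AVG', 'OBP', 'SLG', 'OPS'])

# OPS was built as OBP + SLG above, so the identity should hold row by row.
assert np.allclose(complete['OPS'], complete['OBP'] + complete['SLG'])

# Rate stats should land in sensible ranges (SLG can exceed 1 but is capped at 4.0, i.e. all home runs).
assert complete['AVG'].between(0, 1).all()
assert complete['OBP'].between(0, 1).all()
assert complete['SLG'].between(0, 4).all()

complete[['AVG', 'OBP', 'SLG', 'OPS']].describe()
###Output _____no_output_____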
notebooks/archive/CRO_Multi_Label_Classification_with_TF_IDF.ipynb
###Markdown TF-IDF Multi-label classifier ###Code import sys import os sys.path.append('..') import pandas as pd import numpy as np from sklearn import metrics from data import constants ############################## CONFIG ############################## CATEGORY_LEVEL = 'cro' #@param ["cro", "cro_sub_type_combined"] AVERAGING_STRATEGY= 'macro' #@param ["micro", "macro", "weighted"] #################################################################### # To make the notebook reproducible (not guaranteed for pytorch on different releases/platforms!) SEED_VALUE = 2 df_train = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Labelling/annual reports/Firm_AnnualReport_Labels_Training.pkl") df_test = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Labelling/annual reports/Firm_AnnualReport_Labels_Test.pkl") id_columns = ['report_id', 'page', 'paragraph_no'] df_train["id"] = df_train.apply(lambda row: "_".join([str(row[c]) for c in id_columns]), axis=1) df_test["id"] = df_test.apply(lambda row: "_".join([str(row[c]) for c in id_columns]), axis=1) # Set params if CATEGORY_LEVEL == 'cro': category_labels = constants.cro_categories # Drop n/a's # df.query('cro == ["PR", "TR", "OP"]', inplace=True) else: category_labels = constants.cro_sub_categories # df.query('cro_sub_type_combined.notnull() and cro_sub_type_combined != ""', inplace=True, engine='python') no_of_categories = len(category_labels) label_list = [c["label"] for c in category_labels] label_code_list = [f'{c["code"]}_actual' for c in category_labels] # The comparison hack is to filter NaN's, i.e. all the rows that were not labelled df = df.query("labelling_dataset == labelling_dataset") filter_str = " or ".join([f'{c} == {c}' for c in label_code_list]) # df_training_pos = df.query(f"labelling_dataset == 'training' & ({filter_str})") df_pos = df.query(filter_str) filter_str = " and ".join([f'{c} != {c}' for c in label_code_list]) df_neg = df.query(filter_str) docs_train = df_train.text docs_test = df_test.text labels_train = df_train[label_code_list] labels_test = df_test[label_code_list] labels_train = (labels_train > 0) * 1 labels_test = (labels_test > 0) * 1 # TEMPORARY - Load # Load classifier import pickle with open(os.path.join("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Models/stoxx_inference", 'multilabel_svm_cro.pkl'), 'rb') as f: grid_clf = pickle.load(f) from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from sklearn.multiclass import OneVsRestClassifier from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV from sklearn.svm import SVC from sklearn.utils import shuffle docs_train, labels_train = shuffle(docs_train, labels_train, random_state=SEED_VALUE) pipeline_svm = Pipeline([ ('bow', CountVectorizer(strip_accents = 'ascii')), ('tfidf', TfidfTransformer()), ('classifier', OneVsRestClassifier( SVC(probability=True, random_state=SEED_VALUE)) ), # Note: Nested estimator ]) # Parameters to tune automatically with a grid search # Note: The nested estimator is accessible via the __estimator identifier param_svm = [ { 'bow__ngram_range': [(1, 1), (1, 2), (1, 3)], 'bow__max_features': [None, 50, 200], 'bow__stop_words': ['english', None], 'tfidf__use_idf': (True, False), 'classifier__estimator__C': [1, 10, 100], 'classifier__estimator__kernel': 
['linear', 'rbf']}, ] grid_clf = GridSearchCV( pipeline_svm, param_grid=param_svm, refit=True, n_jobs=-1, scoring='roc_auc', # 'roc_auc' gives an error # cv=StratifiedKFold(label_train, n_folds=5), ) # Grid search fitting grid_clf.fit(docs_train, labels_train) cv_results = pd.DataFrame(grid_clf.cv_results_) print(f"Best score: {grid_clf.best_score_}") print(f"Best params: \n{grid_clf.best_params_}") # Predict for test preds = grid_clf.predict(docs_test) preds_prob = grid_clf.predict_proba(docs_test) # Run prediction on the test set. threshold = 0.5 preds_bool = (preds > threshold) # Use the accuracy and F1 metric to score our classifier's performance on the test set. accuracy = metrics.accuracy_score(labels_test, preds) score = metrics.f1_score(labels_test, preds, average=AVERAGING_STRATEGY) roc_auc = metrics.roc_auc_score(labels_test, preds_prob, average=AVERAGING_STRATEGY) cms = metrics.multilabel_confusion_matrix(labels_test, preds_bool) print(metrics.classification_report(labels_test, preds_bool, target_names=label_list)) print(f"Accuracy: {accuracy}, F1: {score}, RoC AUC: {roc_auc}") import matplotlib.pyplot as plt from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve, auc from numpy import interp from itertools import cycle n_classes = no_of_categories preds = preds labels = labels_test.to_numpy() # Compute ROC curve and ROC area for each class fpr = dict() tpr = dict() roc_auc = dict() for i in range(n_classes): # roc_auc = metrics.roc_auc_score(labels_test, preds_prob, average=AVERAGING_STRATEGY) fpr[i], tpr[i], _ = roc_curve(labels[:, i], preds[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # Compute micro-average ROC curve and ROC area # roc_auc = metrics.roc_auc_score(labels_test, preds_prob, average=AVERAGING_STRATEGY) fpr["micro"], tpr["micro"], _ = roc_curve(labels.ravel(), preds.ravel()) roc_auc["micro"] = auc(fpr["micro"], tpr["micro"]) # First aggregate all false positive rates all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)])) # Then interpolate all ROC curves at this points mean_tpr = np.zeros_like(all_fpr) for i in range(n_classes): mean_tpr += interp(all_fpr, fpr[i], tpr[i]) # Finally average it and compute AUC mean_tpr /= n_classes fpr["macro"] = all_fpr tpr["macro"] = mean_tpr roc_auc["macro"] = auc(fpr["macro"], tpr["macro"]) lw = 2 # Plot all ROC curves plt.figure(figsize = (10,7)) plt.plot(fpr["micro"], tpr["micro"], label='micro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["micro"]), color='deeppink', linestyle=':', linewidth=4) plt.plot(fpr["macro"], tpr["macro"], label='macro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["macro"]), color='navy', linestyle=':', linewidth=4) colors = cycle(['aqua', 'darkorange', 'cornflowerblue', 'green', 'red', 'blue', 'black']) for i, color in zip(range(n_classes), colors): plt.plot(fpr[i], tpr[i], color=color, lw=lw, label='ROC curve of {0} (area = {1:0.2f})' ''.format(label_list[i], roc_auc[i])) plt.plot([0, 1], [0, 1], 'k--', lw=lw) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive (FP) Rate') plt.ylabel('True Positive (TP) Rate') plt.title(f'RoC curves on each {CATEGORY_LEVEL} class') plt.legend(loc="lower right") plt.show() import pickle with open(f"/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Models/stoxx_inference/multilabel_svm_{CATEGORY_LEVEL}.pkl", 'wb') as f: grid_clf.label_list = label_list pickle.dump(grid_clf, f, 4) ###Output _____no_output_____ ###Markdown ###Code # 
Set counts to binary 1/0 doc_labels = (doc_labels > 0) * 1 doc_labels.to_numpy() # Ensure the docs and labels are the same shape assert np.shape(docs)[0] == np.shape(doc_labels)[0] train_dataset = pd.DataFrame(docs, columns=['text']) train_dataset['labels'] = doc_labels.values.tolist() train_neg_dataset['labels'] = np.zeros((np.shape(train_neg_dataset)[0], no_of_categories), dtype=np.int8).tolist() train_neg_dataset = train_neg_dataset.sample(600, random_state=SEED_VALUE) # Set texts and labels docs = train_dataset.text doc_labels = train_dataset.labels # Add some negative examples docs = pd.concat([docs, train_neg_dataset.text]) doc_labels = pd.concat([doc_labels, train_neg_dataset.labels]) # Load data # from google.colab import drive # drive.mount('/content/drive') # df = pd.read_pickle("/content/drive/My Drive/fin-disclosures-nlp/data/labels/Firm_AnnualReport_Labels_100_combined.pkl") df = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Labelling/annual reports/Firm_AnnualReport_Labels_Training_Positive.pkl") df_neg = pd.read_pickle("/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Labelling/annual reports/Firm_AnnualReport_Labels_Training_Negative.pkl") # Set id id_columns = ['report_id', 'page', 'paragraph_no'] df["id"] = df.apply(lambda row: "_".join([str(row[c]) for c in id_columns]), axis=1) df_neg["id"] = df_neg.apply(lambda row: "_".join([str(row[c]) for c in id_columns]), axis=1) # Set params if CATEGORY_LEVEL == 'cro': category_labels = [{"code": "PR", "label": "PR"}, {"code": "TR", "label": "TR"}, {"code": "OP", "label": "OP"}] # Drop n/a's df.query('cro == ["PR", "TR", "OP"]', inplace=True) no_of_categories = len(df.cro.unique()) else: category_labels = [{"code": "ACUTE", "label": "PR - Acute"}, {"code": "CHRON", "label": "PR - Chronic"}, {"code": "POLICY", "label": "TR - Policy"}, {"code": "MARKET", "label": "TR - Market & Technology"}, {"code": "REPUTATION", "label": "TR - Reputation"}, {"code": "PRODUCTS", "label": "OP - Products, Services & Markets"}, {"code": "RESILIENCE", "label": "OP - Resource Efficiency & Resilience"}] df.query('cro_sub_type_combined.notnull() and cro_sub_type_combined != ""', inplace=True, engine='python') no_of_categories = len(df.cro_sub_type_combined.unique()) label_list = [c["label"] for c in category_labels] # Verification that there are no unlabelled/unwanted categories assert len(category_labels) == no_of_categories # Prepare negative dataset neg_docs = df_neg.groupby(["id"]).first().text assert neg_docs.shape[0] == df_neg.shape[0] train_neg_dataset = pd.DataFrame(neg_docs, columns=['text']) docs = df.groupby(["id"]).first().text doc_labels = pd.crosstab(df.id, df[CATEGORY_LEVEL], dropna=False) doc_labels = doc_labels[map(lambda c: c['code'], category_labels)] # Set counts to binary 1/0 doc_labels = (doc_labels > 0) * 1 doc_labels.to_numpy() # Ensure the docs and labels are the same shape assert np.shape(docs)[0] == np.shape(doc_labels)[0] train_dataset = pd.DataFrame(docs, columns=['text']) train_dataset['labels'] = doc_labels.values.tolist() train_neg_dataset['labels'] = np.zeros((np.shape(train_neg_dataset)[0], no_of_categories), dtype=np.int8).tolist() train_neg_dataset = train_neg_dataset.sample(600, random_state=SEED_VALUE) # Set texts and labels docs = train_dataset.text doc_labels = train_dataset.labels # Add some negative examples docs = pd.concat([docs, train_neg_dataset.text]) 
doc_labels = pd.concat([doc_labels, train_neg_dataset.labels]) doc_labels = np.vstack(doc_labels[:,]) doc_labels.sum(axis=0) from sklearn.model_selection import train_test_split # Split to train/test (temporary) X_train, X_test, y_train, y_test = train_test_split(docs, doc_labels, test_size=0.1, random_state=SEED_VALUE) # from sklearn.feature_extraction.text import TfidfVectorizer # Vectorize the 2 datasets using tf-idf. # vectorizer = TfidfVectorizer() # vectors_train = vectorizer.fit_transform(X_train) # vectors_test = vectorizer.transform(X_test) from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import SVC from sklearn.decomposition import PCA from sklearn.cross_decomposition import CCA # clf = OneVsRestClassifier(SVC(kernel='linear', random_state=SEED_VALUE), probability=True) # clf.fit(vectors_train, y_train) # preds = clf.predict(vectors_test) # preds_prob = clf.predict_proba(vectors_test) y_test.sum(axis=0) print(f"Accuracy: {accuracy}, F1: {score}, RoC AUC: {roc_auc}") df_cleaned = df_cleaned.drop(['firm_name', 'ticker', 'country', 'icb_industry', 'icb_supersector', 'should_infer', 'is_inferred', 'company', 'orig_report_type', 'report_type', 'year', 'input_file', 'output_file', 'company_id'], axis=1) df_cleaned cols = list(df_cleaned.columns.values) cols # df = df[['mean', '0', '1', '2', '3']] df_cleaned.id df_output = df_cleaned[['id', 'report_id', 'page_no', 'paragraph_no', 'labelling_dataset', 'coder', 'negative_type', 'is_adjunct', 'text', 'PR_actual', 'TR_actual', 'OP_actual', 'ACUTE_actual', 'CHRON_actual', 'REPUTATION_actual', 'MARKET_actual', 'POLICY_actual', 'PRODUCTS_actual', 'RESILIENCE_actual']] ###Output _____no_output_____
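###Markdown For completeness, a small inference sketch showing how the pickled classifier saved above could be applied to new paragraphs. This is an illustrative addition: the example texts and the rounding are assumptions, and the path simply mirrors the one used when saving the model: ###Code
import pickle
import pandas as pd

# Load the fitted grid-search pipeline saved above (label_list was attached before pickling).
model_path = f"/Users/david/Nextcloud/Dokumente/Education/Uni Bern/Master Thesis/Analyzing Financial Climate Disclosures with NLP/Models/stoxx_inference/multilabel_svm_{CATEGORY_LEVEL}.pkl"
with open(model_path, 'rb') as f:
    clf = pickle.load(f)

# Hypothetical example paragraphs (placeholders, not drawn from the labelled dataset).
new_texts = [
    "Severe flooding disrupted production at two of our plants this year.",
    "New carbon pricing regulation is expected to increase our operating costs.",
]

# One probability column per class, named via the label_list stored on the classifier.
probs = clf.predict_proba(new_texts)
pd.DataFrame(probs, columns=clf.label_list, index=new_texts).round(3)
###Output _____no_output_____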
Modulo3/Clase12_OptimizacionMediaVarianza.ipynb
###Markdown Mean-variance optimization [Mid-term evaluation](http://cursos.iteso.mx/course/view.php?id=1480) **Portfolio theory** is one of the most important advances in modern finance and investing. - It first appeared in a [short article](https://www.math.ust.hk/~maykwok/courses/ma362/07F/markowitz_JF.pdf) called "Portfolio Selection" in the March 1952 issue of "the Journal of Finance". - Written by an unknown student at the University of Chicago named Harry Markowitz. - A short piece (only 14 pages), little text, easy to understand, many figures and just a few references. - It received little attention until the 1960s. Eventually this work became one of the greatest ideas in finance, and it earned Markowitz the Nobel Prize almost 40 years later. - Markowitz was only incidentally interested in stock markets and investing. - He was really interested in understanding how people make their best decisions when facing trade-offs. - The principle of conservation of misery. Or, as gym instructors would say: "no pain, no gain". - If we want more of something, we have to give something up somewhere else. - Studying that phenomenon is what attracted Markowitz. So nobody gets rich by keeping all their money in a savings account. The only way to expect high returns is to take on substantial risk. However, risk also means the possibility of losing as well as gaining. But how much risk is necessary? And is there a way to minimize risk while maximizing gains? - Markowitz fundamentally changed the way investors think about those questions. - He completely altered the practice of investment management. - Even the title of his article was innovative. Portfolio: a collection of assets, rather than assets held individually. - At the time, a portfolio referred to a leather briefcase. - In the rest of this module we will focus on the analytical side of portfolio theory, which can be summarized in two phrases: - No pain, no gain. - Don't put all your eggs in one basket. **Objectives:** - What is the capital allocation line? - What is the Sharpe ratio? - How should we allocate our capital between a risky asset and a risk-free asset? *Reference:* - Notes from the course "Portfolio Selection and Risk Management", Rice University, available on Coursera. ___ 1. Capital allocation line 1.1. Motivation Building a portfolio then involves the following two steps: 1. Choose a portfolio of risky assets. 2. Decide how much of your wealth to invest in that portfolio and how much to invest in risk-free assets. We call step 2 the **asset allocation decision**. Important questions: 1. What is the optimal portfolio of risky assets? - What is the best portfolio of risky assets? - It is a mean-variance efficient portfolio. 2. What is the optimal asset allocation? - How should we split our wealth between the optimal risky portfolio and the risk-free asset? - Concept of the **capital allocation line**. - Concept of the **Sharpe ratio**. Two important assumptions: - Mean-variance utility functions. - A risk-averse investor. The surprising idea that will come out of this analysis is that, whatever the investor's attitude toward risk, the best portfolio of risky assets is identical for all investors. What will matter to each of us individually is simply the optimal asset allocation decision. ___ 1.2. Capital allocation line Let: - $r_s$ be the return of the risky asset, - $r_f$ the risk-free return, and - $w$ the fraction invested in the risky asset. Derive the capital allocation line on the board. **Three Doritos later...** Capital allocation line (LAC): $E[r_p]$ is related to $\sigma_p$ in an affine way, that is, through the equation of a straight line: $$E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p.$$ - The slope of the LAC is the Sharpe ratio $\frac{E[r_s-r_f]}{\sigma_s}=\frac{E[r_s]-r_f}{\sigma_s}$, - which tells us how much return we obtain per unit of risk taken on by holding the risky asset (portfolio). Now the question is: where on this line do we want to be? ___ 1.3. Solving for the optimal capital allocation Recapping from the last class, we have the indifference curves: **we want to be on the highest possible indifference curve that is tangent to the LAC**. See on the board. Analytically, the problem is $$\max_{w} \quad E[U(r_p)]\equiv\max_{w} \quad E[r_p]-\frac{1}{2}\gamma\sigma_p^2,$$ where the points $(\sigma_p,E[r_p])$ are restricted to lie on the LAC, that is $E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p$ and $\sigma_p=w\sigma_s$. The problem above can then be written as follows: $$\max_{w} \quad r_f+wE[r_s-r_f]-\frac{1}{2}\gamma w^2\sigma_s^2.$$ Find the $w$ that maximizes the expression above on the board. **Three Doritos later...** The solution is then: $$w^\ast=\frac{E[r_s]-r_f}{\gamma\sigma_s^2}.$$ Intuitively: - $w^\ast\propto E[r_s-r_f]$: the larger the excess return obtained from the risky asset, the more we will want to invest in it. - $w^\ast\propto \frac{1}{\gamma}$: the more risk-averse you are, the less you will want to invest in the risky asset. - $w^\ast\propto \frac{1}{\sigma_s^2}$: the riskier the asset, the less you will want to invest in it.
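A quick sketch of the board step, for completeness: writing $f(w)=r_f+wE[r_s-r_f]-\frac{1}{2}\gamma w^2\sigma_s^2$, the first-order condition is $$\frac{df}{dw}=E[r_s-r_f]-\gamma w\sigma_s^2=0 \quad\Longrightarrow\quad w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2},$$ and since $\frac{d^2f}{dw^2}=-\gamma\sigma_s^2<0$ for a risk-averse investor ($\gamma>0$), this critical point is indeed a maximum.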
___ 2. Example of optimal capital allocation: U.S. stocks and Treasury bills Let's put some numbers on the derivation we just did. In this case we will consider: - **Risky portfolio**: the U.S. stock market (represented by some market index such as the S&P500). - **Risk-free asset**: U.S. Treasury bills (T-bills). We have the following data: $$E[r_{US}]=11.9\%,\quad \sigma_{US}=19.15\%, \quad r_f=1\%.$$ Recall that we can write the expression for the LAC as: \begin{align}E[r_p]&=r_f+\left[\frac{E[r_{US}-r_f]}{\sigma_{US}}\right]\sigma_p\\ &=0.01+\text{S.R.}\sigma_p,\end{align} where $\text{S.R.}=\frac{0.119-0.01}{0.1915}\approx0.569$ is the Sharpe ratio (what is this, again?). Let's plot the LAC with this real data: ###Code
# Import the libraries we will use
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Data
Ers = 0.119
ss = 0.1915
rf = 0.01

# Sharpe ratio for this asset
RS = (Ers - rf) / ss

# Vector of portfolio volatilities (suggested: 0% to 50%)
sp = np.linspace(0, 0.5)

# LAC
Erp = RS * sp + rf

# Plot
plt.figure(figsize=(6, 4))
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(sp, Erp, 'r', lw=2, label="LAC")
plt.plot(0, rf, 'og', ms=5, label='Risk-free asset')
plt.plot(ss, Ers, 'ob', ms=5, label='Risky asset')
plt.grid()
plt.xlabel('Volatility $\sigma$')
plt.ylabel('Expected return $E[r]$')
plt.legend(loc='best')
###Output _____no_output_____ ###Markdown So, at which point on this line would we want to be? - Well, we already saw that it depends on your preferences. - In particular, on your attitude toward risk, measured by your risk-aversion coefficient. Solution to the optimal capital allocation problem: $$\max_{w} \quad E[U(r_p)]$$ $$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}$$ Since we already have data, we can try several risk-aversion coefficients: ###Code
# import pandas
import pandas as pd

# Create a DataFrame with the weights, expected return
# and volatility of the optimal portfolio
# between the risky and risk-free assets,
# indexed by the risk-aversion coefficients
# from 1 to 10 (integers)
gamma = np.arange(1, 11, 1)
df = pd.DataFrame({'$\gamma$': gamma,
                   r'$W^\ast$': (Ers - rf) / (gamma * ss**2)})
df
###Output _____no_output_____
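###Markdown The comment in the cell above asks for the weights, the expected return, and the volatility of each optimal complete portfolio, but only the weights are tabulated. A minimal sketch of the two missing columns (an addition, not part of the original notebook), reusing `Ers`, `ss`, `rf` and `gamma` from the cells above, with $E[r_p]=r_f+w^\ast(E[r_s]-r_f)$ and $\sigma_p=w^\ast\sigma_s$: ###Code
# Sketch: extend the table with the expected return and volatility
# of the optimal complete portfolio for each risk-aversion coefficient
w_star = (Ers - rf) / (gamma * ss**2)
df_opt = pd.DataFrame({'$\gamma$': gamma,
                       r'$w^\ast$': w_star,
                       '$E[r_p]$': rf + w_star * (Ers - rf),
                       '$\sigma_p$': w_star * ss})
df_opt
###Output _____no_output_____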
Ejemplo de asignación óptima de capital: acciones y billetes de EU Pongamos algunos números con algunos datos, para ilustrar la derivación que acabamos de hacer.En este caso, consideraremos:- **Portafolio riesgoso**: mercado de acciones de EU (representados en algún índice de mercado como el S&P500).- **Activo libre de riesgo**: billetes del departamento de tesorería de EU (T-bills).Tenemos los siguientes datos:$$E[r_{US}]=11.9\%,\quad \sigma_{US}=19.15\%, \quad r_f=1\%.$$ Recordamos que podemos escribir la expresión de la LAC como:\begin{align}E[r_p]&=r_f+\left[\frac{E[r_{US}-r_f]}{\sigma_{US}}\right]\sigma_p\\ &=0.01+\text{S.R.}\sigma_p,\end{align}donde $\text{S.R}=\frac{0.119-0.01}{0.1915}\approx0.569$ es el radio de Sharpe (¿qué es lo que es esto?).Grafiquemos la LAC con estos datos reales: ###Code # Importamos librerías que vamos a utilizar import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Datos Ers = 0.119 ss = 0.1915 rf = 0.01 # Radio de Sharpe para este activo RS = (Ers - rf) / ss # Vector de volatilidades del portafolio (sugerido: 0% a 50%) sp = np.linspace(0, 0.5) # LAC LAC = rf + RS * sp # Gráfica plt.figure(figsize=(6, 4)) plt.plot(sp, LAC, lw=2, label='LAC') plt.plot(0, rf, 'o', label='Activo libre de riesgo') plt.plot(ss, Ers, 'o', label='Activo/portafolio riesgoso') plt.xlabel('Volatilidad $\sigma$') plt.ylabel('Rendimiento esperado E[r]') plt.legend(loc='upper left', bbox_to_anchor=(1.05, 1)) ###Output _____no_output_____ ###Markdown Bueno, y ¿en qué punto de esta línea querríamos estar?- Pues ya vimos que depende de tus preferencias.- En particular, de tu actitud de cara al riesgo, medido por tu coeficiente de aversión al riesgo.Solución al problema de asignación óptima de capital:$$\max_{w} \quad E[U(r_p)]$$$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}$$ Dado que ya tenemos datos, podemos intentar para varios coeficientes de aversión al riesgo: ###Code # importar pandas import pandas as pd # Crear un DataFrame con los pesos, rendimiento # esperado y volatilidad del portafolio óptimo # entre los activos riesgoso y libre de riesgo # cuyo índice sean los coeficientes de aversión # al riesgo del 1 al 10 (enteros) g = np.arange(1, 11) w_opt = pd.DataFrame(data={'Coef. Av. Riesgo': g, 'w Opt.': (Ers - rf) / (g * ss**2)}) w_opt ###Output _____no_output_____ ###Markdown Optimización media-varianzaLa **teoría de portafolios** es uno de los avances más importantes en las finanzas modernas e inversiones.- Apareció por primera vez en un [artículo corto](https://www.math.ust.hk/~maykwok/courses/ma362/07F/markowitz_JF.pdf) llamado "Portfolio Selection" en la edición de Marzo de 1952 de "the Journal of Finance".- Escrito por un desconocido estudiante de la Universidad de Chicago, llamado Harry Markowitz.- Escrito corto (sólo 14 páginas), poco texto, fácil de entender, muchas gráficas y unas cuantas referencias.- No se le prestó mucha atención hasta los 60s.Finalmente, este trabajo se convirtió en una de las más grandes ideas en finanzas, y le dió a Markowitz el Premio Nobel casi 40 años después.- Markowitz estaba incidentalmente interesado en los mercados de acciones e inversiones.- Estaba más bien interesado en entender cómo las personas tomaban sus mejores decisiones cuando se enfrentaban con "trade-offs".- Principio de conservación de la miseria. 
O, dirían los instructores de gimnasio: "no pain, no gain".- Si queremos más de algo, tenemos que perder en algún otro lado.- El estudio de este fenómeno era el que le atraía a Markowitz.De manera que nadie se hace rico poniendo todo su dinero en la cuenta de ahorros. La única manera de esperar altos rendimientos es si se toma bastante riesgo. Sin embargo, riesgo significa también la posibilidad de perder, tanto como ganar.Pero, ¿qué tanto riesgo es necesario?, y ¿hay alguna manera de minimizar el riesgo mientras se maximizan las ganancias?- Markowitz básicamente cambió la manera en que los inversionistas pensamos acerca de esas preguntas.- Alteró completamente la práctica de la administración de inversiones.- Incluso el título de su artículo era innovador. Portafolio: una colección de activos en lugar de tener activos individuales.- En ese tiempo, un portafolio se refería a una carpeta de piel.- En el resto de este módulo, nos ocuparemos de la parte analítica de la teoría de portafolios, la cual puede ser resumida en dos frases: - No pain, no gain. - No ponga todo el blanquillo en una sola bolsa. **Objetivos:**- ¿Qué es la línea de asignación de capital?- ¿Qué es el radio de Sharpe?- ¿Cómo deberíamos asignar nuestro capital entre un activo riesgoso y un activo libre de riesgo?*Referencia:*- Notas del curso "Portfolio Selection and Risk Management", Rice University, disponible en Coursera.___ 1. Línea de asignación de capital 1.1. MotivaciónEl proceso de construcción de un portafolio tiene entonces los siguientes dos pasos:1. Escoger un portafolio de activos riesgosos.2. Decidir qué tanto de tu riqueza invertirás en el portafolio y qué tanto invertirás en activos libres de riesgo.Al paso 2 lo llamamos **decisión de asignación de activos**. Preguntas importantes:1. ¿Qué es el portafolio óptimo de activos riesgosos? - ¿Cuál es el mejor portafolio de activos riesgosos? - Es un portafolio eficiente en media-varianza.2. ¿Qué es la distribución óptima de activos? - ¿Cómo deberíamos distribuir nuestra riqueza entre el portafolo riesgoso óptimo y el activo libre de riesgo? - Concepto de **línea de asignación de capital**. - Concepto de **radio de Sharpe**. Dos suposiciones importantes:- Funciones de utilidad media-varianza.- Inversionista averso al riesgo. La idea sorprendente que saldrá de este análisis, es que cualquiera que sea la actitud del inversionista de cara al riesgo, el mejor portafolio de activos riesgosos es idéntico para todos los inversionistas.Lo que nos importará a cada uno de nosotros en particular, es simplemente la desición óptima de asignación de activos.___ 1.2. Línea de asignación de capital Sean:- $r_s$ el rendimiento del activo riesgoso,- $r_f$ el rendimiento libre de riesgo, y- $w$ la fracción invertida en el activo riesgoso. Realizar deducción de la línea de asignación de capital en el tablero. **Tres doritos después...** Línea de asignación de capital (LAC):$E[r_p]$ se relaciona con $\sigma_p$ de manera afín. Es decir, mediante la ecuación de una recta:$$E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p.$$- La pendiente de la LAC es el radio de Sharpe $\frac{E[r_s-r_f]}{\sigma_s}=\frac{E[r_s]-r_f}{\sigma_s}$,- el cual nos dice qué tanto rendimiento obtenemos por unidad de riesgo asumido en la tenencia del activo (portafolio) riesgoso. Ahora, la pregunta es, ¿dónde sobre esta línea queremos estar?___ 1.3. 
Resolviendo para la asignación óptima de capitalRecapitulando de la clase pasada, tenemos las curvas de indiferencia: **queremos estar en la curva de indiferencia más alta posible, que sea tangente a la LAC**. Ver en el tablero. Analíticamente, el problema es$$\max_{w} \quad E[U(r_p)]\equiv\max_{w} \quad E[r_p]-\frac{1}{2}\gamma\sigma_p^2,$$donde los puntos $(\sigma_p,E[r_p])$ se restringen a estar en la LAC, esto es $E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p$ y $\sigma_p=w\sigma_s$. Entonces el problema anterior se puede escribir de la siguiente manera:$$\max_{w} \quad r_f+wE[r_s-r_f]-\frac{1}{2}\gamma w^2\sigma_s^2.$$ Encontrar la $w$ que maximiza la anterior expresión en el tablero. **Tres doritos después...** La solución es entonces:$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}.$$De manera intuitiva:- $w^\ast\propto E[r_s-r_f]$: a más exceso de rendimiento que se obtenga del activo riesgoso, más querremos invertir en él.- $w^\ast\propto \frac{1}{\gamma}$: mientras más averso al riesgo seas, menos querrás invertir en el activo riesgoso.- $w^\ast\propto \frac{1}{\sigma_s^2}$: mientras más riesgoso sea el activo, menos querrás invertir en él.___ 2. Ejemplo de asignación óptima de capital: acciones y billetes de EU Pongamos algunos números con algunos datos, para ilustrar la derivación que acabamos de hacer.En este caso, consideraremos:- **Portafolio riesgoso**: mercado de acciones de EU (representados en algún índice de mercado como el S&P500).- **Activo libre de riesgo**: billetes del departamento de tesorería de EU (T-bills).Tenemos los siguientes datos:$$E[r_{US}]=11.9\%,\quad \sigma_{US}=19.15\%, \quad r_f=1\%.$$ Recordamos que podemos escribir la expresión de la LAC como:\begin{align}E[r_p]&=r_f+\left[\frac{E[r_{US}-r_f]}{\sigma_{US}}\right]\sigma_p\\ &=0.01+\text{S.R.}\sigma_p,\end{align}donde $\text{S.R}=\frac{0.119-0.01}{0.1915}\approx0.569$ es el radio de Sharpe (¿qué es lo que es esto?).Grafiquemos la LAC con estos datos reales: ###Code # Importamos librerías que vamos a utilizar import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Datos Ers = 0.119 ss = 0.1915 rf = 0.01 # Radio de Sharpe para este activo SR = (Ers - rf) / ss # Vector de volatilidades del portafolio (sugerido: 0% a 50%) sp = np.linspace(0, 0.5, 100) # LAC Erp = rf + SR * sp # Gráfica plt.figure(figsize=(6, 4)) plt.plot(sp, Erp, lw=3, label='LAC') plt.plot(0, rf, 'ob', ms=5, label='Libre de riesgo') plt.plot(ss, Ers, 'or', ms=5, label='Portafolio/activo riesgoso') plt.legend(loc='best') plt.xlabel('Volatilidad $\sigma$') plt.ylabel('Rendimiento esperado $E[r]$') plt.grid() ###Output _____no_output_____ ###Markdown Bueno, y ¿en qué punto de esta línea querríamos estar?- Pues ya vimos que depende de tus preferencias.- En particular, de tu actitud de cara al riesgo, medido por tu coeficiente de aversión al riesgo.Solución al problema de asignación óptima de capital:$$\max_{w} \quad E[U(r_p)]$$$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}$$ Dado que ya tenemos datos, podemos intentar para varios coeficientes de aversión al riesgo: ###Code # importar pandas import pandas as pd # Crear un DataFrame con los pesos, rendimiento # esperado y volatilidad del portafolio óptimo # entre los activos riesgoso y libre de riesgo # cuyo índice sean los coeficientes de aversión # al riesgo del 1 al 10 (enteros) gamma = np.arange(1, 11) w = (Ers - rf) / (gamma * ss**2) pd.DataFrame({'$\gamma$': gamma, '$w$': w}) ###Output _____no_output_____ ###Markdown Optimización media-varianza [Evaluación de medio 
término](http://cursos.iteso.mx/course/view.php?id=1480)La **teoría de portafolios** es uno de los avances más importantes en las finanzas modernas e inversiones.- Apareció por primera vez en un [artículo corto](https://www.math.ust.hk/~maykwok/courses/ma362/07F/markowitz_JF.pdf) llamado "Portfolio Selection" en la edición de Marzo de 1952 de "the Journal of Finance".- Escrito por un desconocido estudiante de la Universidad de Chicago, llamado Harry Markowitz.- Escrito corto (sólo 14 páginas), poco texto, fácil de entender, muchas gráficas y unas cuantas referencias.- No se le prestó mucha atención hasta los 60s.Finalmente, este trabajo se convirtió en una de las más grandes ideas en finanzas, y le dió a Markowitz el Premio Nobel casi 40 años después.- Markowitz estaba incidentalmente interesado en los mercados de acciones e inversiones.- Estaba más bien interesado en entender cómo las personas tomaban sus mejores decisiones cuando se enfrentaban con "trade-offs".- Principio de conservación de la miseria. O, dirían los instructores de gimnasio: "no pain, no gain".- Si queremos más de algo, tenemos que perder en algún otro lado.- El estudio de este fenómeno era el que le atraía a Markowitz.De manera que nadie se hace rico poniendo todo su dinero en la cuenta de ahorros. La única manera de esperar altos rendimientos es si se toma bastante riesgo. Sin embargo, riesgo significa también la posibilidad de perder, tanto como ganar.Pero, ¿qué tanto riesgo es necesario?, y ¿hay alguna manera de minimizar el riesgo mientras se maximizan las ganancias?- Markowitz básicamente cambió la manera en que los inversionistas pensamos acerca de esas preguntas.- Alteró completamente la práctica de la administración de inversiones.- Incluso el título de su artículo era innovador. Portafolio: una colección de activos en lugar de tener activos individuales.- En ese tiempo, un portafolio se refería a una carpeta de piel.- En el resto de este módulo, nos ocuparemos de la parte analítica de la teoría de portafolios, la cual puede ser resumida en dos frases: - No pain, no gain. - No ponga todo el blanquillo en una sola bolsa. **Objetivos:**- ¿Qué es la línea de asignación de capital?- ¿Qué es el radio de Sharpe?- ¿Cómo deberíamos asignar nuestro capital entre un activo riesgoso y un activo libre de riesgo?*Referencia:*- Notas del curso "Portfolio Selection and Risk Management", Rice University, disponible en Coursera.___ 1. Línea de asignación de capital 1.1. MotivaciónEl proceso de construcción de un portafolio tiene entonces los siguientes dos pasos:1. Escoger un portafolio de activos riesgosos.2. Decidir qué tanto de tu riqueza invertirás en el portafolio y qué tanto invertirás en activos libres de riesgo.Al paso 2 lo llamamos **decisión de asignación de activos**. Preguntas importantes:1. ¿Qué es el portafolio óptimo de activos riesgosos? - ¿Cuál es el mejor portafolio de activos riesgosos? - Es un portafolio eficiente en media-varianza.2. ¿Qué es la distribución óptima de activos? - ¿Cómo deberíamos distribuir nuestra riqueza entre el portafolo riesgoso óptimo y el activo libre de riesgo? - Concepto de **línea de asignación de capital**. - Concepto de **radio de Sharpe**. Dos suposiciones importantes:- Funciones de utilidad media-varianza.- Inversionista averso al riesgo. 
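To make the first assumption concrete: a mean-variance utility function scores a portfolio as its expected return minus a penalty of one half times the risk-aversion coefficient times the variance, $E[r_p]-\frac{1}{2}\gamma\sigma_p^2$. A minimal sketch (the function name and the two example investors are illustrative; the return and volatility figures are the US stocks / T-bills numbers used later in this notebook): ###Code
# Sketch: mean-variance utility U = E[r] - 0.5 * gamma * sigma**2.
# A more risk-averse investor (larger gamma) penalizes volatility more heavily.
def mean_variance_utility(er, sigma, gamma):
    return er - 0.5 * gamma * sigma**2

for gamma in (2, 8):
    u_risky = mean_variance_utility(0.119, 0.1915, gamma)  # US stocks
    u_safe = mean_variance_utility(0.010, 0.0, gamma)      # T-bills (no volatility)
    print(gamma, round(u_risky, 4), round(u_safe, 4))
###Output _____no_output_____ ###Markdown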
La idea sorprendente que saldrá de este análisis, es que cualquiera que sea la actitud del inversionista de cara al riesgo, el mejor portafolio de activos riesgosos es idéntico para todos los inversionistas.Lo que nos importará a cada uno de nosotros en particular, es simplemente la desición óptima de asignación de activos.___ 1.2. Línea de asignación de capital Sean:- $r_s$ el rendimiento del activo riesgoso,- $r_f$ el rendimiento libre de riesgo, y- $w$ la fracción invertida en el activo riesgoso. Realizar deducción de la línea de asignación de capital en el tablero. **Tres doritos después...** Línea de asignación de capital (LAC):$E[r_p]$ se relaciona con $\sigma_p$ de manera afín. Es decir, mediante la ecuación de una recta:$$E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p.$$- La pendiente de la LAC es el radio de Sharpe $\frac{E[r_s-r_f]}{\sigma_s}=\frac{E[r_s]-r_f}{\sigma_s}$,- el cual nos dice qué tanto rendimiento obtenemos por unidad de riesgo asumido en la tenencia del activo (portafolio) riesgoso. Ahora, la pregunta es, ¿dónde sobre esta línea queremos estar?___ 1.3. Resolviendo para la asignación óptima de capitalRecapitulando de la clase pasada, tenemos las curvas de indiferencia: **queremos estar en la curva de indiferencia más alta posible, que sea tangente a la LAC**. Ver en el tablero. Analíticamente, el problema es$$\max_{w} \quad E[U(r_p)]\equiv\max_{w} \quad E[r_p]-\frac{1}{2}\gamma\sigma_p^2,$$donde los puntos $(\sigma_p,E[r_p])$ se restringen a estar en la LAC, esto es $E[r_p]=r_f+\frac{E[r_s-r_f]}{\sigma_s}\sigma_p$ y $\sigma_p=w\sigma_s$. Entonces el problema anterior se puede escribir de la siguiente manera:$$\max_{w} \quad r_f+wE[r_s-r_f]-\frac{1}{2}\gamma w^2\sigma_s^2.$$ Encontrar la $w$ que maximiza la anterior expresión en el tablero. **Tres doritos después...** La solución es entonces:$$w^\ast=\frac{E[r_s]-r_f}{\gamma\sigma_s^2}.$$De manera intuitiva:- $w^\ast\propto E[r_s-r_f]$: a más exceso de rendimiento que se obtenga del activo riesgoso, más querremos invertir en él.- $w^\ast\propto \frac{1}{\gamma}$: mientras más averso al riesgo seas, menos querrás invertir en el activo riesgoso.- $w^\ast\propto \frac{1}{\sigma_s^2}$: mientras más riesgoso sea el activo, menos querrás invertir en él.___ 2. 
Ejemplo de asignación óptima de capital: acciones y billetes de EU Pongamos algunos números con algunos datos, para ilustrar la derivación que acabamos de hacer.En este caso, consideraremos:- **Portafolio riesgoso**: mercado de acciones de EU (representados en algún índice de mercado como el S&P500).- **Activo libre de riesgo**: billetes del departamento de tesorería de EU (T-bills).Tenemos los siguientes datos:$$E[r_{US}]=11.9\%,\quad \sigma_{US}=19.15\%, \quad r_f=1\%.$$ Recordamos que podemos escribir la expresión de la LAC como:\begin{align}E[r_p]&=r_f+\left[\frac{E[r_{US}-r_f]}{\sigma_{US}}\right]\sigma_p\\ &=0.01+\text{S.R.}\sigma_p,\end{align}donde $\text{S.R}=\frac{0.119-0.01}{0.1915}\approx0.569$ es el radio de Sharpe (¿qué es lo que es esto?).Grafiquemos la LAC con estos datos reales: ###Code # Importamos librerías que vamos a utilizar import numpy as np from matplotlib import pyplot as plt %matplotlib inline # Datos Ers = 0.119 ss = 0.1915 rf = 0.01 # Radio de Sharpe para este activo RS = (Ers - rf) / ss # Vector de volatilidades del portafolio (sugerido: 0% a 50%) sp = np.linspace(0, 0.5) # LAC Erp = RS * sp + rf # Gráfica plt.figure(figsize=(6, 4)) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.plot(0, rf, 'og', ms=8, label='Activo libre de riesgo') plt.plot(ss, Ers, 'ob', ms=8, label='Activo riesgoso') plt.plot(sp, Erp, 'r', lw=2, label='LAC') plt.grid() plt.xlabel('Volatilidad $\sigma$') plt.ylabel('Rendimiento esperado $E[r]$') plt.legend(loc='best') ###Output _____no_output_____ ###Markdown Bueno, y ¿en qué punto de esta línea querríamos estar?- Pues ya vimos que depende de tus preferencias.- En particular, de tu actitud de cara al riesgo, medido por tu coeficiente de aversión al riesgo.Solución al problema de asignación óptima de capital:$$\max_{w} \quad E[U(r_p)]$$$$w^\ast=\frac{E[r_s-r_f]}{\gamma\sigma_s^2}$$ Dado que ya tenemos datos, podemos intentar para varios coeficientes de aversión al riesgo: ###Code # importar pandas import pandas as pd # Crear un DataFrame con los pesos, rendimiento # esperado y volatilidad del portafolio óptimo # entre los activos riesgoso y libre de riesgo # cuyo índice sean los coeficientes de aversión # al riesgo del 1 al 10 (enteros) gamma = np.arange(1, 11, 1) df = pd.DataFrame({'$\gamma$': gamma, '$w^*$': (Ers - rf) / (gamma * ss**2)}) df ###Output _____no_output_____
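###Markdown The comment in the cell above asks for the weights together with the expected return and volatility of each optimal portfolio, but only the weights end up in the table. A minimal sketch of the two missing columns, reusing `gamma`, `Ers`, `ss` and `rf` from the cells above (`w_star` and `df_opt` are just illustrative names): ###Code
# Sketch: extend the table with the expected return and volatility
# of the optimal portfolio for each risk-aversion coefficient.
w_star = (Ers - rf) / (gamma * ss**2)                 # optimal weight in the risky asset
df_opt = pd.DataFrame({r'$\gamma$': gamma,
                       r'$w^*$': w_star,
                       r'$E[r_p]$': rf + w_star * (Ers - rf),   # expected return at the optimum
                       r'$\sigma_p$': w_star * ss})             # volatility at the optimum
df_opt
###Output _____no_output_____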
C3/W1/assignment/C3W1_Assignment.ipynb
###Markdown Week 1: Explore the BBC News archiveWelcome! In this assignment you will be working with a variation of the [BBC News Classification Dataset](https://www.kaggle.com/c/learn-ai-bbc/overview), which contains 2225 examples of news articles with their respective categories (labels).Let's get started! ###Code import csv from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences ###Output _____no_output_____ ###Markdown Begin by looking at the structure of the csv that contains the data: ###Code with open("./bbc-text.csv", 'r') as csvfile: print(f"First line (header) looks like this:\n\n{csvfile.readline()}") print(f"Each data point looks like this:\n\n{csvfile.readline()}") ###Output _____no_output_____ ###Markdown As you can see, each data point is composed of the category of the news article followed by a comma and then the actual text of the article. Removing StopwordsOne important step when working with text data is to remove the **stopwords** from it. These are the most common words in the language and they rarely provide useful information for the classification process.Complete the `remove_stopwords` below. This function should receive a string and return another string that excludes all of the stopwords provided. ###Code # GRADED FUNCTION: remove_stopwords def remove_stopwords(sentence): # List of stopwords stopwords = ["a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves" ] # Sentence converted to lowercase-only sentence = sentence.lower() ### START CODE HERE ### END CODE HERE return sentence # Test your function remove_stopwords("I am about to go to the store and get any snack") ###Output _____no_output_____ ###Markdown ***Expected Output:***```'go store get snack'``` Reading the raw dataNow you need to read the data from the csv file. To do so, complete the `parse_data_from_file` function.A couple of things to note:- You should omit the first line as it contains the headers and not data points.- There is no need to save the data points as numpy arrays, regular lists is fine.- To read from csv files use [`csv.reader`](https://docs.python.org/3/library/csv.htmlcsv.reader) by passing the appropriate arguments.- `csv.reader` returns an iterable that returns each row in every iteration. 
So the label can be accessed via row[0] and the text via row[1].- Use the `remove_stopwords` function in each sentence. ###Code def parse_data_from_file(filename): sentences = [] labels = [] with open(filename, 'r') as csvfile: ### START CODE HERE reader = csv.reader(None, delimiter=None) ### END CODE HERE return sentences, labels # Test your function sentences, labels = parse_data_from_file("./bbc-text.csv") print(f"There are {len(sentences)} sentences in the dataset.\n") print(f"First sentence has {len(sentences[0].split())} words (after removing stopwords).\n") print(f"There are {len(labels)} labels in the dataset.\n") print(f"The first 5 labels are {labels[:5]}") ###Output _____no_output_____ ###Markdown ***Expected Output:***```There are 2225 sentences in the dataset.First sentence has 436 words (after removing stopwords).There are 2225 labels in the dataset.The first 5 labels are ['tech', 'business', 'sport', 'sport', 'entertainment']``` Using the TokenizerNow it is time to tokenize the sentences of the dataset. Complete the `fit_tokenizer` below. This function should receive the list of sentences as input and return a [Tokenizer](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer) that has been fitted to those sentences. You should also define the "Out of Vocabulary" token as ``. ###Code def fit_tokenizer(sentences): ### START CODE HERE # Instantiate the Tokenizer class by passing in the oov_token argument tokenizer = None # Fit on the sentences ### END CODE HERE return tokenizer tokenizer = fit_tokenizer(sentences) word_index = tokenizer.word_index print(f"Vocabulary contains {len(word_index)} words\n") print("<OOV> token included in vocabulary" if "<OOV>" in word_index else "<OOV> token NOT included in vocabulary") ###Output _____no_output_____ ###Markdown ***Expected Output:***```Vocabulary contains 29714 words token included in vocabulary``` ###Code def get_padded_sequences(tokenizer, sentences): ### START CODE HERE # Convert sentences to sequences sequences = None # Pad the sequences using the post padding strategy padded_sequences = None ### END CODE HERE return padded_sequences padded_sequences = get_padded_sequences(tokenizer, sentences) print(f"First padded sequence looks like this: \n\n{padded_sequences[0]}\n") print(f"Numpy array of all sequences has shape: {padded_sequences.shape}\n") print(f"This means there are {padded_sequences.shape[0]} sequences in total and each one has a size of {padded_sequences.shape[1]}") ###Output _____no_output_____ ###Markdown ***Expected Output:***```First padded sequence looks like this: [ 96 176 1157 ... 0 0 0]Numpy array of all sequences has shape: (2225, 2438)This means there are 2225 sequences in total and each one has a size of 2438``` ###Code def tokenize_labels(labels): ### START CODE HERE # Instantiate the Tokenizer class # No need to pass additional arguments since you will be tokenizing the labels label_tokenizer = None # Fit the tokenizer to the labels # Save the word index label_word_index = None # Save the sequences label_sequences = None ### END CODE HERE return label_sequences, label_word_index label_sequences, label_word_index = tokenize_labels(labels) print(f"Vocabulary of labels looks like this {label_word_index}\n") print(f"First ten sequences {label_sequences[:10]}\n") ###Output _____no_output_____ ###Markdown Week 1: Explore the BBC News archiveWelcome! 
In this assignment you will be working with a variation of the [BBC News Classification Dataset](https://www.kaggle.com/c/learn-ai-bbc/overview), which contains 2225 examples of news articles with their respective categories (labels).Let's get started! ###Code import csv from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences ###Output _____no_output_____ ###Markdown Begin by looking at the structure of the csv that contains the data: ###Code with open("./bbc-text.csv", 'r') as csvfile: print(f"First line (header) looks like this:\n\n{csvfile.readline()}") print(f"Each data point looks like this:\n\n{csvfile.readline()}") ###Output _____no_output_____ ###Markdown As you can see, each data point is composed of the category of the news article followed by a comma and then the actual text of the article. Removing StopwordsOne important step when working with text data is to remove the **stopwords** from it. These are the most common words in the language and they rarely provide useful information for the classification process.Complete the `remove_stopwords` below. This function should receive a string and return another string that excludes all of the stopwords provided. ###Code # GRADED FUNCTION: remove_stopwords def remove_stopwords(sentence): """ Removes a list of stopwords Args: sentence (string): sentence to remove the stopwords from Returns: sentence (string): lowercase sentence without the stopwords """ # List of stopwords stopwords = ["a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves" ] # Sentence converted to lowercase-only sentence = sentence.lower() ### START CODE HERE ### END CODE HERE return sentence # Test your function remove_stopwords("I am about to go to the store and get any snack") ###Output _____no_output_____ ###Markdown ***Expected Output:***```'go store get snack'``` Reading the raw dataNow you need to read the data from the csv file. 
To do so, complete the `parse_data_from_file` function. A couple of things to note:- You should omit the first line as it contains the headers and not data points.- There is no need to save the data points as numpy arrays, regular lists are fine.- To read from csv files use [`csv.reader`](https://docs.python.org/3/library/csv.html#csv.reader) by passing the appropriate arguments.- `csv.reader` returns an iterable that returns each row in every iteration. So the label can be accessed via row[0] and the text via row[1].- Use the `remove_stopwords` function in each sentence. ###Code def parse_data_from_file(filename): """ Extracts sentences and labels from a CSV file Args: filename (string): path to the CSV file Returns: sentences, labels (list of string, list of string): tuple containing lists of sentences and labels """ sentences = [] labels = [] with open(filename, 'r') as csvfile: ### START CODE HERE reader = csv.reader(None, delimiter=None) ### END CODE HERE return sentences, labels # Test your function sentences, labels = parse_data_from_file("./bbc-text.csv") print(f"There are {len(sentences)} sentences in the dataset.\n") print(f"First sentence has {len(sentences[0].split())} words (after removing stopwords).\n") print(f"There are {len(labels)} labels in the dataset.\n") print(f"The first 5 labels are {labels[:5]}") ###Output _____no_output_____ ###Markdown ***Expected Output:***```There are 2225 sentences in the dataset.First sentence has 436 words (after removing stopwords).There are 2225 labels in the dataset.The first 5 labels are ['tech', 'business', 'sport', 'sport', 'entertainment']``` Using the Tokenizer Now it is time to tokenize the sentences of the dataset. Complete the `fit_tokenizer` below. This function should receive the list of sentences as input and return a [Tokenizer](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer) that has been fitted to those sentences. You should also define the "Out of Vocabulary" token as `<OOV>`.
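If you want to see these pieces in action before completing the graded cells, here is a small illustration of how `Tokenizer`, `texts_to_sequences`, and `pad_sequences` behave; the sentences are invented for this example and it is not the assignment solution: ###Code
# Illustration only (toy data, not the graded solution).
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

toy_sentences = ["stocks rally lifts markets", "film wins top award"]
toy_tokenizer = Tokenizer(oov_token="<OOV>")
toy_tokenizer.fit_on_texts(toy_sentences)
print(toy_tokenizer.word_index)  # '<OOV>' is assigned index 1

toy_seqs = toy_tokenizer.texts_to_sequences(toy_sentences + ["markets award unseen"])
print(pad_sequences(toy_seqs, padding='post'))  # unseen words map to the <OOV> index; shorter rows get trailing 0s
###Output _____no_output_____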
###Code def fit_tokenizer(sentences): """ Instantiates the Tokenizer class Args: sentences (list): lower-cased sentences without stopwords Returns: tokenizer (object): an instance of the Tokenizer class containing the word-index dictionary """ ### START CODE HERE # Instantiate the Tokenizer class by passing in the oov_token argument tokenizer = None # Fit on the sentences ### END CODE HERE return tokenizer tokenizer = fit_tokenizer(sentences) word_index = tokenizer.word_index print(f"Vocabulary contains {len(word_index)} words\n") print("<OOV> token included in vocabulary" if "<OOV>" in word_index else "<OOV> token NOT included in vocabulary") ###Output _____no_output_____ ###Markdown ***Expected Output:***```Vocabulary contains 29714 words <OOV> token included in vocabulary``` ###Code def get_padded_sequences(tokenizer, sentences): """ Generates an array of token sequences and pads them to the same length Args: tokenizer (object): Tokenizer instance containing the word-index dictionary sentences (list of string): list of sentences to tokenize and pad Returns: padded_sequences (array of int): tokenized sentences padded to the same length """ ### START CODE HERE # Convert sentences to sequences sequences = None # Pad the sequences using the post padding strategy padded_sequences = None ### END CODE HERE return padded_sequences padded_sequences = get_padded_sequences(tokenizer, sentences) print(f"First padded sequence looks like this: \n\n{padded_sequences[0]}\n") print(f"Numpy array of all sequences has shape: {padded_sequences.shape}\n") print(f"This means there are {padded_sequences.shape[0]} sequences in total and each one has a size of {padded_sequences.shape[1]}") ###Output _____no_output_____ ###Markdown ***Expected Output:***```First padded sequence looks like this: [ 96 176 1157 ... 0 0 0]Numpy array of all sequences has shape: (2225, 2438)This means there are 2225 sequences in total and each one has a size of 2438``` ###Code def tokenize_labels(labels): """ Tokenizes the labels Args: labels (list of string): labels to tokenize Returns: label_sequences, label_word_index (list of string, dictionary): tokenized labels and the word-index """ ### START CODE HERE # Instantiate the Tokenizer class # No need to pass additional arguments since you will be tokenizing the labels label_tokenizer = None # Fit the tokenizer to the labels # Save the word index label_word_index = None # Save the sequences label_sequences = None ### END CODE HERE return label_sequences, label_word_index label_sequences, label_word_index = tokenize_labels(labels) print(f"Vocabulary of labels looks like this {label_word_index}\n") print(f"First ten sequences {label_sequences[:10]}\n") ###Output _____no_output_____
code/AutomaticDataLabeling.ipynb
###Markdown Automatic Data Labeling for Sentiment Analysis Let us see how can we use automatic data labeling for building a sentiment classifier. This consists of four major steps:1. **Loading Data**: train_labelled.txt, test_labelled.txt are the two files (from files/ folder) we will use here. The source of this data is mentioned in the slides. In the train_labelled.txt, I am only going to use the text, and discard the labels. I am going to keep the labels for test data, as we need something to evaluate our approach.2. **Writing Labeling Functions**: We write Python programs that take as input a data point and assign labels (or abstain) using heuristics, pattern matching, or third-party models.3. **Combining Labeling Function Outputs with the Label Model**: We model the outputs of the labeling functions over the training set using a novel, theoretically-grounded [modeling approach](https://arxiv.org/abs/1605.07723), which estimates the accuracies and correlations of the labeling functions using only their agreements and disagreements, and then uses this to reweight and combine their outputs, which we then use as _probabilistic_ training labels.4. **Training a Classifier**: We train a classifier that can predict labels for *any* YouTube comment (not just the ones labeled by the labeling functions) using the probabilistic training labels from step 3.(Text is adapted from original Snorkel tutorial on [spam classification](https://github.com/snorkel-team/snorkel-tutorials/blob/master/spam/01_spam_tutorial.ipynb)) Task: Sentiment Classification 1. Loading Data ###Code #reads train/test files def read_data(filepath): texts = [] labels = [] for line in open(filepath): sentence, label = line.strip().split("\t") labels.append(int(label)) texts.append(sentence) return texts, labels train_texts, discard = read_data("../files/train_labelled.txt") test_texts, test_labels = read_data("../files/test_labelled.txt") discard = None #training labels are discarded. We won't use them #convert to dataframe for ease of use later. from pandas import DataFrame df_train = DataFrame (train_texts,columns=['text']) df_test = DataFrame(test_texts,columns=['text']) df_test['label'] = test_labels ###Output _____no_output_____ ###Markdown 2. Writing Labeling Functions (LFs) a) Exploring the training set for initial ideas We'll start by looking at 20 random data points from the `train` set to generate some ideas for LFs. ###Code import random random.sample(train_texts,10) random.sample(train_texts,10) ###Output _____no_output_____ ###Markdown Using a list of positive/negative opinion words is a good starting point for writing the labeling functions. I will use the "Opinion Lexicon" from [here](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html) for this purpose. This contains a list of English positive and negative opinion words or sentiment words (around 6800 words in total). Details about the data are in the link. negative-words.txt, positive-words.txt in files/ contain these files. b) Writing a few LFs Labeling functions in Snorkel are created with the[`@labeling_function` decorator](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.labeling_function.html).The [decorator](https://realpython.com/primer-on-python-decorators/) can be applied to _any Python function_ that returns a label for a single data point. ###Code #load positive and negative words lists. 
def getsentimentwords(filepath): mylist = [] for line in open(filepath, encoding="utf-8"): if line and not line.startswith(";"): mylist.append(line.strip()) return mylist positives = getsentimentwords("../resources/positive-words.txt") negatives = getsentimentwords("../resources/negative-words.txt") #load the vader lexicon: def getvader(filepath): mydict = {} for line in open(filepath, encoding="utf-8"): temp = line.split("\t") if temp[0].isalpha(): mydict[temp[0]] = float(temp[1]) return mydict myvader = getvader("../resources/vader_lexicon.txt") #load pos/neg emotions words def getemotions(filepath): mylist = [] for line in open(filepath, encoding="utf-8"): mylist.extend(line.strip().split()) return mylist myposemotions = getemotions("../resources/posemotions.txt") mynegemotions = getemotions("../resources/negemotions.txt") print(len(positives)) print(len(negatives)) print(len(myvader)) from snorkel.labeling import labeling_function import re POS=1 NEG=0 ABSTAIN=-1 #a simple labeling function checking if a sentence has positive words @labeling_function() def postive(x): poswords = 0 temp = x.text.lower().split() for word in temp: if word in positives: poswords +=1 if poswords > 0: return POS else: return ABSTAIN #a simple labeling function checking if a sentence has negative words @labeling_function() def negative(x): negwords = 0 temp = x.text.lower().split() for word in temp: if word in negatives: negwords +=1 if negwords > 0: return NEG else: return ABSTAIN #Look up the word's mean-sentiment rating in Vader Lexicon #https://github.com/cjhutto/vaderSentiment/blob/master/vaderSentiment/vader_lexicon.txt #if overall score for a sentence is positive, return POS, else, NEG. if 0, return ABSTAIN. @labeling_function() def vaderlex(x): temp = x.text.lower().split() sentiment = 0 for word in temp: if word in myvader: sentiment += myvader[word] if sentiment > 0: return POS elif sentiment <0: return NEG else: return ABSTAIN #Look up the word's mean-sentiment rating in Vader Lexicon #https://github.com/cjhutto/vaderSentiment/blob/master/vaderSentiment/vader_lexicon.txt #if there are more positive words (>1.5 rating) it is POS, else if less than -1.5, NEG. #finally, if POS=NEG, return ABSTAIN. 
@labeling_function() def vaderlex2(x): temp = x.text.lower().split() poswords = 0 negwords = 0 for word in temp: if word in myvader: sentiment = myvader[word] if sentiment > 1.5: poswords +=1 elif sentiment <-1.5: negwords += 1 if poswords > negwords: return POS elif negwords > poswords: return NEG else: return ABSTAIN #a simple labeling function checking if a sentence has positive emotion words @labeling_function() def posemo(x): poswords = 0 temp = x.text.lower().split() for word in temp: if word in myposemotions: poswords +=1 if poswords > 0: return POS else: return ABSTAIN #a simple labeling function checking if a sentence has negative emotion words @labeling_function() def negemo(x): negwords = 0 temp = x.text.lower().split() for word in temp: if word in mynegemotions: negwords +=1 if negwords > 0: return NEG else: return ABSTAIN #More function ideas: https://medium.com/@datamonsters/sentiment-analysis-tools-overview-part-1-positive-and-negative-words-databases-ae35431a470c ###Output _____no_output_____ ###Markdown To apply one or more LFs that we've written to a collection of data points, we use an [`LFApplier`](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.LFApplier.html). Because our data points are represented with a Pandas DataFrame in this tutorial, we use the [`PandasLFApplier`](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.PandasLFApplier.html). Correspondingly, a single data point `x` that's passed into our LFs will be a [Pandas `Series` object](https://pandas.pydata.org/pandas-docs/stable/reference/series.html). It's important to note that these LFs will work for any object with an attribute named `text`, not just Pandas objects. Snorkel has several other appliers for different data point collection types which you can browse in the [API documentation](https://snorkel.readthedocs.io/en/master/packages/labeling.html). The output of the `apply(...)` method is a ***label matrix***, a fundamental concept in Snorkel. It's a NumPy array `L` with one column for each LF and one row for each data point, where `L[i, j]` is the label that the `j`th labeling function output for the `i`th data point. We'll create a label matrix for the `train` set.
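As a brief aside before doing that: since each LF only relies on a `.text` attribute, the same functions can also be applied outside of Pandas. The sketch below uses two made-up data points and the illustrative names `toy_lfs` and `toy_points`, and assumes the list-based `LFApplier` is importable from `snorkel.labeling` alongside `PandasLFApplier`: ###Code
# Sketch: applying the same LFs to plain objects that expose a .text attribute,
# using the list-based LFApplier rather than the Pandas applier.
from types import SimpleNamespace
from snorkel.labeling import LFApplier

toy_lfs = [postive, negative, vaderlex, vaderlex2, posemo, negemo]
toy_points = [
    SimpleNamespace(text="what a wonderful phone, excellent value"),
    SimpleNamespace(text="terrible battery, very disappointed"),
]
L_toy = LFApplier(lfs=toy_lfs).apply(toy_points)
print(L_toy)  # one row per data point, one column per LF; -1 means the LF abstained
###Output _____no_output_____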
###Code from snorkel.labeling import PandasLFApplier from pandas import DataFrame lfs = [postive,negative,vaderlex,vaderlex2,posemo,negemo] applier = PandasLFApplier(lfs=lfs) L_train = applier.apply(df=df_train) ###Output 100%|██████████| 2000/2000 [00:02<00:00, 924.68it/s] ###Markdown c) Evaluate performance on training set Lots of statistics about labeling functions &mdash; like coverage &mdash; are useful when building any Snorkel application.So Snorkel provides tooling for common LF analyses using the[`LFAnalysis` utility](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.LFAnalysis.html).We report the following summary statistics for multiple LFs at once:* **Polarity**: The set of unique labels this LF outputs (excluding abstains)* **Coverage**: The fraction of the dataset the LF labels* **Overlaps**: The fraction of the dataset where this LF and at least one other LF label* **Conflicts**: The fraction of the dataset where this LF and at least one other LF label and disagree* **Correct**: The number of data points this LF labels correctly (if gold labels are provided)* **Incorrect**: The number of data points this LF labels incorrectly (if gold labels are provided)* **Empirical Accuracy**: The empirical accuracy of this LF (if gold labels are provided)For *Correct*, *Incorrect*, and *Empirical Accuracy*, we don't want to penalize the LF for data points where it abstained.We calculate these statistics only over those data points where the LF output a label.**Note that in our current setup, we can't compute these statistics because we don't have any ground-truth labels (other than in the test set, which we cannot look at). Not to worry—Snorkel's `LabelModel` will estimate them without needing any ground-truth labels in the next step!** ###Code from snorkel.labeling import LFAnalysis lfs = [postive,negative, vaderlex, vaderlex2, posemo, negemo] LFAnalysis(L=L_train, lfs=lfs).lf_summary() ###Output _____no_output_____ ###Markdown 4. Combining Labeling Function Outputs with the Label Model This tutorial demonstrates just a handful of the types of LFs that one might write for this task.One of the key goals of Snorkel is _not_ to replace the effort, creativity, and subject matter expertise required to come up with these labeling functions, but rather to make it faster to write them, since **in Snorkel the labeling functions are assumed to be noisy, i.e. innaccurate, overlapping, etc.**Said another way: the LF abstraction provides a flexible interface for conveying a huge variety of supervision signals, and the `LabelModel` is able to denoise these signals, reducing the need for painstaking manual fine-tuning. Once we perform some LFs analysis and finalize our list, we can now apply these once again with `LFApplier` to get the label matrices.The Pandas format provides an easy interface that many practitioners are familiar with, but it is also less optimized for scale.For larger datasets, more compute-intensive LFs, or larger LF sets, you may decide to use one of the other data formatsthat Snorkel supports natively, such as Dask DataFrames or PySpark DataFrames, and their corresponding applier objects.For more info, check out the [Snorkel API documentation](https://snorkel.readthedocs.io/en/master/packages/labeling.html). 
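As a quick sanity check on the summary statistics above, the coverage numbers can also be recomputed directly from the label matrix with plain NumPy. This is a minimal sketch, assuming the `L_train` matrix and the `ABSTAIN` constant defined in the cells above.
###Code
# Fraction of data points each LF labels (the "Coverage" column of lf_summary)
coverage_per_lf = (L_train != ABSTAIN).mean(axis=0)

# Fraction of data points labeled by at least one LF
overall_coverage = (L_train != ABSTAIN).any(axis=1).mean()

print(coverage_per_lf)
print(overall_coverage)
###Output _____no_output_____ ###Markdown Next, we apply the finalized LFs to both the train and test sets and look at the summary again.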
###Code applier = PandasLFApplier(lfs=lfs) L_train = applier.apply(df=df_train) L_test = applier.apply(df=df_test) LFAnalysis(L=L_train, lfs=lfs).lf_summary() ###Output _____no_output_____ ###Markdown We see that our labeling functions vary in coverage, how much they overlap/conflict with one another, and almost certainly their accuracies as well. Our goal is now to convert the labels from our LFs into a single _noise-aware_ probabilistic (or confidence-weighted) label per data point.A simple baseline for doing this is to take the majority vote on a per-data point basis: if more LFs voted SPAM than HAM, label it SPAM (and vice versa).We can test this with the[`MajorityLabelVoter` baseline model](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.model.baselines.MajorityLabelVoter.htmlsnorkel.labeling.model.baselines.MajorityLabelVoter). However, as we can see from the summary statistics of our LFs in the previous section, they have varying properties and should not be treated identically. In addition to having varied accuracies and coverages, LFs may be correlated, resulting in certain signals being overrepresented in a majority-vote-based model. To handle these issues appropriately, we will instead use a more sophisticated Snorkel `LabelModel` to combine the outputs of the LFs.This model will ultimately produce a single set of noise-aware training labels, which are probabilistic or confidence-weighted labels. We will then use these labels to train a classifier for our task. For more technical details of this overall approach, see our [NeurIPS 2016](https://arxiv.org/abs/1605.07723) and [AAAI 2019](https://arxiv.org/abs/1810.02840) papers. For more info on the API, see the [`LabelModel` documentation](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.model.label_model.LabelModel.htmlsnorkel.labeling.model.label_model.LabelModel).Note that no gold labels are used during the training process.The only information we need is the label matrix, which contains the output of the LFs on our training set.The `LabelModel` is able to learn weights for the labeling functions using only the label matrix as input.We also specify the `cardinality`, or number of classes. ###Code from snorkel.labeling.model import MajorityLabelVoter from snorkel.labeling.model import LabelModel majority_model = MajorityLabelVoter() preds_train = majority_model.predict(L=L_train) label_model = LabelModel(cardinality=2, verbose=True) label_model.fit(L_train=L_train, n_epochs=500, log_freq=100, seed=123) Y_test = df_test.label.values majority_acc = majority_model.score(L=L_test, Y=Y_test, tie_break_policy="random")["accuracy"] print(f"{'Majority Vote Accuracy:':<25} {majority_acc * 100:.1f}%") label_model_acc = label_model.score(L=L_test, Y=Y_test, tie_break_policy="random")["accuracy"] print(f"{'Label Model Accuracy:':<25} {label_model_acc * 100:.1f}%") ###Output Majority Vote Accuracy: 71.7% Label Model Accuracy: 71.6% ###Markdown The majority vote model or more sophisticated `LabelModel` could in principle be used directly as a classifier if the outputs of our labeling functions were made available at test time. (In this case, because of my poor LFs, LabelModel does poorly compared to MajorityLabel model, but usually, that is not the case).Anyway, these models (i.e. 
these re-weighted combinations of our labeling function's votes) will abstain on the data points that our labeling functions don't cover (and additionally, may require slow or unavailable features to execute at test time).In the next section, we will instead use the outputs of the `LabelModel` as training labels to train a discriminative classifier **which can generalize beyond the labeling function outputs** to see if we can improve performance further.This classifier will also only need the text of the comment to make predictions, making it much more suitable for inference over unseen comments.For more information on the properties of the label model, see the [Snorkel documentation](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.model.label_model.LabelModel.htmlsnorkel.labeling.model.label_model.LabelModel). Filtering out unlabeled data points As we saw earlier, some of the data points in our `train` set received no labels from any of our LFs.These data points convey no supervision signal and tend to hurt performance, so we filter them out before training using a[built-in utility](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.filter_unlabeled_dataframe.htmlsnorkel.labeling.filter_unlabeled_dataframe). ###Code from snorkel.labeling import filter_unlabeled_dataframe df_train_filtered, probs_train_filtered = filter_unlabeled_dataframe( X=df_train, y=probs_train, L=L_train ) print(df_train.shape) print(df_train_filtered.shape) ###Output (2000, 1) (1419, 1) ###Markdown 5. Training a Classifier In this final section of the tutorial, we'll use the probabilistic training labels we generated in the last section to train a classifier for our task.**The output of the Snorkel `LabelModel` is just a set of labels which can be used with most popular libraries for performing supervised learning, such as TensorFlow, Keras, PyTorch, Scikit-Learn, Ludwig, and XGBoost.**In this tutorial, we use the well-known library [Scikit-Learn](https://scikit-learn.org).**Note that typically, Snorkel is used (and really shines!) with much more complex, training data-hungry models, but we will use Logistic Regression here for simplicity of exposition.** Featurization For simplicity and speed, we use a simple "bag of n-grams" feature representation: each data point is represented by a one-hot vector marking which words or 2-word combinations are present in the comment text. 
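As a side note, here is a tiny toy illustration of what an n-gram range produces (the sentence is made up and is not part of the pipeline); every contiguous run of words within the range becomes its own feature.
###Code
from sklearn.feature_extraction.text import CountVectorizer

# Toy example: unigrams and bigrams of one made-up sentence
toy = CountVectorizer(ngram_range=(1, 2)).fit(["the movie was great"])
print(sorted(toy.vocabulary_))
# ['great', 'movie', 'movie was', 'the', 'the movie', 'was', 'was great']
###Output _____no_output_____ ###Markdown With `ngram_range=(1, 5)` below, every run of one to five words becomes a feature, which is why the vocabulary grows to tens of thousands of entries.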
###Code from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(ngram_range=(1, 5)) X_train = vectorizer.fit_transform(df_train_filtered.text.tolist()) X_test = vectorizer.transform(df_test.text.tolist()) print(X_train.shape) print(X_test.shape) ###Output (1419, 61454) (1000, 61454) ###Markdown Scikit-Learn Classifier As we saw in Section 4, the `LabelModel` outputs probabilistic (float) labels.If the classifier we are training accepts target labels as floats, we can train on these labels directly (see describe the properties of this type of "noise-aware" loss in our [NeurIPS 2016 paper](https://arxiv.org/abs/1605.07723)).If we want to use a library or model that doesn't accept probabilistic labels (such as Scikit-Learn), we can instead replace each label distribution with the label of the class that has the maximum probability.This can easily be done using the[`probs_to_preds` helper method](https://snorkel.readthedocs.io/en/master/packages/_autosummary/utils/snorkel.utils.probs_to_preds.htmlsnorkel.utils.probs_to_preds).We do note, however, that this transformation is lossy, as we no longer have values for our confidence in each label. ###Code from snorkel.utils import probs_to_preds preds_train_filtered = probs_to_preds(probs=probs_train_filtered) ###Output _____no_output_____ ###Markdown We then use these labels to train a classifier as usual. ###Code from sklearn.linear_model import LogisticRegression from sklearn.svm import LinearSVC Y_test = df_test.label.values for classifier in [LogisticRegression(C=1e3, solver="liblinear"), LinearSVC()]: classifier.fit(X=X_train, y=preds_train_filtered) print("Performance for ", type(classifier).__name__), print(f"Test Accuracy: {classifier.score(X=X_test, y=test_labels) * 100:.1f}%") #print(preds_train_filtered) #print(test_labels) ###Output Performance for LogisticRegression Test Accuracy: 61.6% Performance for LinearSVC Test Accuracy: 61.8% ###Markdown This obviously looks worser than the majority labeler, but as you will see in the withLabeledTrainingData notebook, the performance is much better if we use sbert features instead of Bag of Ngrams. Let us save this new labeled dataset created by snorkel, and use it later to compare with other approaches. ###Code #Collect the new dataset and save it. auto_labeled_data = DataFrame( {'text': df_train_filtered.text.tolist(), 'sentiment': preds_train_filtered, }) auto_labeled_data.to_csv("../files/snorkellabeled_train.csv", sep="\t", index=False, header=False) ###Output _____no_output_____ ###Markdown Automatic Data Labeling for Sentiment Analysis Let us see how can we use automatic data labeling for building a sentiment classifier. This consists of four major steps:1. **Loading Data**: train_labelled.txt, test_labelled.txt are the two files (from files/ folder) we will use here. The source of this data is mentioned in the slides. In the train_labelled.txt, I am only going to use the text, and discard the labels. I am going to keep the labels for test data, as we need something to evaluate our approach.2. **Writing Labeling Functions**: We write Python programs that take as input a data point and assign labels (or abstain) using heuristics, pattern matching, or third-party models.3. 
**Combining Labeling Function Outputs with the Label Model**: We model the outputs of the labeling functions over the training set using a novel, theoretically-grounded [modeling approach](https://arxiv.org/abs/1605.07723), which estimates the accuracies and correlations of the labeling functions using only their agreements and disagreements, and then uses this to reweight and combine their outputs, which we then use as _probabilistic_ training labels.4. **Training a Classifier**: We train a classifier that can predict labels for *any* YouTube comment (not just the ones labeled by the labeling functions) using the probabilistic training labels from step 3.(Text in this notebook is adapted from original Snorkel tutorial on [spam classification](https://github.com/snorkel-team/snorkel-tutorials/tree/master/spam)) Task: Sentiment Classification 1. Loading Data ###Code #reads train/test files def read_data(filepath): texts = [] labels = [] for line in open(filepath): sentence, label = line.strip().split("\t") labels.append(int(label)) texts.append(sentence) return texts, labels train_texts, discard = read_data("../files/train_labelled.txt") test_texts, test_labels = read_data("../files/test_labelled.txt") discard = None #training labels are discarded. We won't use them #convert to dataframe for ease of use later. from pandas import DataFrame df_train = DataFrame (train_texts,columns=['text']) df_test = DataFrame(test_texts,columns=['text']) df_test['label'] = test_labels ###Output _____no_output_____ ###Markdown 2. Writing Labeling Functions (LFs) a) Exploring the training set for initial ideas We'll start by looking at 20 random data points from the `train` set to generate some ideas for LFs. ###Code import random random.sample(train_texts,10) random.sample(train_texts,10) ###Output _____no_output_____ ###Markdown Using a list of positive/negative opinion words is a good starting point for writing the labeling functions. I will use the "Opinion Lexicon" from [here](https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html) for this purpose. This contains a list of English positive and negative opinion words or sentiment words (around 6800 words in total). Details about the data are in the link. negative-words.txt, positive-words.txt in files/ contain these files. b) Writing a few LFs Labeling functions in Snorkel are created with the[`@labeling_function` decorator](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.labeling_function.html).The [decorator](https://realpython.com/primer-on-python-decorators/) can be applied to _any Python function_ that returns a label for a single data point. ###Code #load positive and negative words lists. 
def getsentimentwords(filepath): mylist = [] for line in open(filepath, encoding="utf-8"): if line and not line.startswith(";"): mylist.append(line.strip()) return mylist positives = getsentimentwords("../resources/positive-words.txt") negatives = getsentimentwords("../resources/negative-words.txt") #load the vader lexicon: def getvader(filepath): mydict = {} for line in open(filepath, encoding="utf-8"): temp = line.split("\t") if temp[0].isalpha(): mydict[temp[0]] = float(temp[1]) return mydict myvader = getvader("../resources/vader_lexicon.txt") #load pos/neg emotions words def getemotions(filepath): mylist = [] for line in open(filepath, encoding="utf-8"): mylist.extend(line.strip().split()) return mylist myposemotions = getemotions("../resources/posemotions.txt") mynegemotions = getemotions("../resources/negemotions.txt") print(len(positives)) print(len(negatives)) print(len(myvader)) from snorkel.labeling import labeling_function import re POS=1 NEG=0 ABSTAIN=-1 #a simple labeling function checking if a sentence has positive words @labeling_function() def postive(x): poswords = 0 temp = x.text.lower().split() for word in temp: if word in positives: poswords +=1 if poswords > 0: return POS else: return ABSTAIN #a simple labeling function checking if a sentence has negative words @labeling_function() def negative(x): negwords = 0 temp = x.text.lower().split() for word in temp: if word in negatives: negwords +=1 if negwords > 0: return NEG else: return ABSTAIN #Look up the word's mean-sentiment rating in Vader Lexicon #https://github.com/cjhutto/vaderSentiment/blob/master/vaderSentiment/vader_lexicon.txt #if overall score for a sentence is positive, return POS, else, NEG. if 0, return ABSTAIN. @labeling_function() def vaderlex(x): temp = x.text.lower().split() sentiment = 0 for word in temp: if word in myvader: sentiment += myvader[word] if sentiment > 0: return POS elif sentiment <0: return NEG else: return ABSTAIN #Look up the word's mean-sentiment rating in Vader Lexicon #https://github.com/cjhutto/vaderSentiment/blob/master/vaderSentiment/vader_lexicon.txt #if there are more positive words (>1.5 rating) it is POS, else if less than -1.5, NEG. #finally, if POS=NEG, return ABSTAIN. 
@labeling_function() def vaderlex2(x): temp = x.text.lower().split() poswords = 0 negwords = 0 for word in temp: if word in myvader: sentiment = myvader[word] if sentiment > 1.5: poswords +=1 elif sentiment <-1.5: negwords += 1 if poswords > negwords: return POS elif negwords > poswords: return NEG else: return ABSTAIN #a simple labeling function checking if a sentence has positive emotion words @labeling_function() def posemo(x): poswords = 0 temp = x.text.lower().split() for word in temp: if word in myposemotions: poswords +=1 if poswords > 0: return POS else: return ABSTAIN #a simple labeling function checking if a sentence has negative emotion words @labeling_function() def negemo(x): poswords = 0 temp = x.text.lower().split() for word in temp: if word in mynegemotions: poswords +=1 if poswords > 0: return POS else: return ABSTAIN #More function ideas: https://medium.com/@datamonsters/sentiment-analysis-tools-overview-part-1-positive-and-negative-words-databases-ae35431a470c ###Output _____no_output_____ ###Markdown To apply one or more LFs that we've written to a collection of data points, we use an[`LFApplier`](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.LFApplier.html).Because our data points are represented with a Pandas DataFrame in this tutorial, we use the[`PandasLFApplier`](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.PandasLFApplier.html).Correspondingly, a single data point `x` that's passed into our LFs will be a [Pandas `Series` object](https://pandas.pydata.org/pandas-docs/stable/reference/series.html).It's important to note that these LFs will work for any object with an attribute named `text`, not just Pandas objects.Snorkel has several other appliers for different data point collection types which you can browse in the [API documentation](https://snorkel.readthedocs.io/en/master/packages/labeling.html).The output of the `apply(...)` method is a ***label matrix***, a fundamental concept in Snorkel.It's a NumPy array `L` with one column for each LF and one row for each data point, where `L[i, j]` is the label that the `j`th labeling function output for the `i`th data point.We'll create a label matrix for the `train` set. ###Code from snorkel.labeling import PandasLFApplier from pandas import DataFrame lfs = [postive,negative,vaderlex,vaderlex2,posemo,negemo] applier = PandasLFApplier(lfs=lfs) L_train = applier.apply(df=df_train) ###Output 100%|██████████| 2000/2000 [00:02<00:00, 962.05it/s] ###Markdown c) Evaluate performance on training set Lots of statistics about labeling functions &mdash; like coverage &mdash; are useful when building any Snorkel application.So Snorkel provides tooling for common LF analyses using the[`LFAnalysis` utility](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.LFAnalysis.html).We report the following summary statistics for multiple LFs at once:* **Polarity**: The set of unique labels this LF outputs (excluding abstains)* **Coverage**: The fraction of the dataset the LF labels* **Overlaps**: The fraction of the dataset where this LF and at least one other LF label* **Conflicts**: The fraction of the dataset where this LF and at least one other LF label and disagree ###Code from snorkel.labeling import LFAnalysis lfs = [postive,negative, vaderlex, vaderlex2, posemo, negemo] LFAnalysis(L=L_train, lfs=lfs).lf_summary() ###Output _____no_output_____ ###Markdown 4. 
Combining Labeling Function Outputs with the Label Model This tutorial demonstrates just a handful of the types of LFs that one might write for this task.One of the key goals of Snorkel is _not_ to replace the effort, creativity, and subject matter expertise required to come up with these labeling functions, but rather to make it faster to write them, since **in Snorkel the labeling functions are assumed to be noisy, i.e. innaccurate, overlapping, etc.**Said another way: the LF abstraction provides a flexible interface for conveying a huge variety of supervision signals, and the `LabelModel` is able to denoise these signals, reducing the need for painstaking manual fine-tuning. Once we perform some LFs analysis and finalize our list, we can now apply these once again with `LFApplier` to get the label matrices.The Pandas format provides an easy interface that many practitioners are familiar with, but it is also less optimized for scale.For larger datasets, more compute-intensive LFs, or larger LF sets, you may decide to use one of the other data formatsthat Snorkel supports natively, such as Dask DataFrames or PySpark DataFrames, and their corresponding applier objects.For more info, check out the [Snorkel API documentation](https://snorkel.readthedocs.io/en/master/packages/labeling.html). ###Code applier = PandasLFApplier(lfs=lfs) L_train = applier.apply(df=df_train) L_test = applier.apply(df=df_test) LFAnalysis(L=L_train, lfs=lfs).lf_summary() ###Output _____no_output_____ ###Markdown Our goal is now to convert the labels from our LFs into a single _noise-aware_ probabilistic (or confidence-weighted) label per data point.A simple baseline for doing this is to take the majority vote on a per-data point basis: if more LFs voted SPAM than HAM, label it SPAM (and vice versa).We can test this with the[`MajorityLabelVoter` baseline model](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.model.baselines.MajorityLabelVoter.htmlsnorkel.labeling.model.baselines.MajorityLabelVoter). However, as we can see from the summary statistics of our LFs in the previous section, they have varying properties and should not be treated identically. In addition to having varied accuracies and coverages, LFs may be correlated, resulting in certain signals being overrepresented in a majority-vote-based model. To handle these issues appropriately, we will instead use a more sophisticated Snorkel `LabelModel` to combine the outputs of the LFs.This model will ultimately produce a single set of noise-aware training labels, which are probabilistic or confidence-weighted labels. We will then use these labels to train a classifier for our task. For more technical details of this overall approach, see our [NeurIPS 2016](https://arxiv.org/abs/1605.07723) and [AAAI 2019](https://arxiv.org/abs/1810.02840) papers. For more info on the API, see the [`LabelModel` documentation](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.model.label_model.LabelModel.htmlsnorkel.labeling.model.label_model.LabelModel).Note that no gold labels are used during the training process.The only information we need is the label matrix, which contains the output of the LFs on our training set.The `LabelModel` is able to learn weights for the labeling functions using only the label matrix as input.We also specify the `cardinality`, or number of classes. 
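For intuition, here is a minimal sketch of what the majority-vote baseline does for a single, hypothetical row of the label matrix, ignoring abstains; the `MajorityLabelVoter` used below is the library version of this idea, while the `LabelModel` instead learns how much to trust each LF.
###Code
import numpy as np

row = np.array([1, 0, 1, -1, 1, -1])   # hypothetical LF outputs for one data point
votes = row[row != -1]                 # drop abstains
majority = np.bincount(votes).argmax() if votes.size else -1
print(majority)                        # -> 1, i.e. POS wins three votes to one
###Output _____no_output_____ ###Markdown Now let's fit both models and compare them.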
###Code from snorkel.labeling.model import MajorityLabelVoter from snorkel.labeling.model import LabelModel majority_model = MajorityLabelVoter() preds_train = majority_model.predict(L=L_train) label_model = LabelModel(cardinality=2, verbose=True) label_model.fit(L_train=L_train, n_epochs=500, log_freq=100, seed=123) Y_test = df_test.label.values majority_acc = majority_model.score(L=L_test, Y=Y_test, tie_break_policy="random")["accuracy"] print(f"{'Majority Vote Accuracy:':<25} {majority_acc * 100:.1f}%") label_model_acc = label_model.score(L=L_test, Y=Y_test, tie_break_policy="random")["accuracy"] print(f"{'Label Model Accuracy:':<25} {label_model_acc * 100:.1f}%") ###Output Majority Vote Accuracy: 71.7% Label Model Accuracy: 71.6% ###Markdown The majority vote model or more sophisticated `LabelModel` could in principle be used directly as a classifier if the outputs of our labeling functions were made available at test time.However, these models (i.e. these re-weighted combinations of our labeling function's votes) will abstain on the data points that our labeling functions don't cover (and additionally, may require slow or unavailable features to execute at test time).In the next section, we will instead use the outputs of the `LabelModel` as training labels to train a discriminative classifier **which can generalize beyond the labeling function outputs** to see if we can improve performance further.This classifier will also only need the text of the comment to make predictions, making it much more suitable for inference over unseen comments.For more information on the properties of the label model, see the [Snorkel documentation](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.model.label_model.LabelModel.htmlsnorkel.labeling.model.label_model.LabelModel). Let's briefly confirm that the labels the `LabelModel` produces are indeed probabilistic in nature.The following histogram shows the confidences we have that each data point has the label Positive.The points we are least certain about will have labels close to 0.5. ###Code import matplotlib.pyplot as plt %matplotlib inline def plot_probabilities_histogram(Y): plt.hist(Y, bins=10) plt.xlabel("Probability of Positive") plt.ylabel("Number of data points") plt.show() probs_train = label_model.predict_proba(L=L_train) plot_probabilities_histogram(probs_train[:, POS]) ###Output _____no_output_____ ###Markdown Filtering out unlabeled data points As we saw earlier, some of the data points in our `train` set received no labels from any of our LFs.These data points convey no supervision signal and tend to hurt performance, so we filter them out before training using a[built-in utility](https://snorkel.readthedocs.io/en/master/packages/_autosummary/labeling/snorkel.labeling.filter_unlabeled_dataframe.htmlsnorkel.labeling.filter_unlabeled_dataframe). ###Code from snorkel.labeling import filter_unlabeled_dataframe df_train_filtered, probs_train_filtered = filter_unlabeled_dataframe( X=df_train, y=probs_train, L=L_train ) print(df_train.shape) print(df_train_filtered.shape) ###Output (2000, 1) (1419, 1) ###Markdown So, almost 600/2000 datapoints are unlabeled with these LFs I wrote! Ideally, LFs we write should cover as many data points as possible, so that we will have as much training data as possible in the next step! 5. 
Training a Classifier In this final section of the tutorial, we'll use the probabilistic training labels we generated in the last section to train a classifier for our task.**The output of the Snorkel `LabelModel` is just a set of labels which can be used with most popular libraries for performing supervised learning, such as TensorFlow, Keras, PyTorch, Scikit-Learn, Ludwig, and XGBoost.**In this tutorial, we use the well-known library [Scikit-Learn](https://scikit-learn.org).**Note that typically, Snorkel is used (and really shines!) with much more complex, training data-hungry models, but we will use Logistic Regression here for simplicity of exposition.** Featurization For simplicity and speed, we use a simple "bag of words" feature representation: each data point is represented by a one-hot vector marking which words or 2-word combinations are present in the comment text.Note: In the video, I used bag of n-grams (1,5) instead of bag-of-words, and hence, you see a difference in performance due to that! Just using bag of words is much better here!! ###Code from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() X_train = vectorizer.fit_transform(df_train_filtered.text.tolist()) X_test = vectorizer.transform(df_test.text.tolist()) vectorizer_n = CountVectorizer(ngram_range=(1, 5)) X_train_ngrams = vectorizer_n.fit_transform(df_train_filtered.text.tolist()) X_test_ngrams = vectorizer_n.transform(df_test.text.tolist()) print(X_train.shape) print(X_test.shape) ###Output (1419, 3812) (1000, 3812) ###Markdown Scikit-Learn Classifier As we saw in Section 4, the `LabelModel` outputs probabilistic (float) labels.If the classifier we are training accepts target labels as floats, we can train on these labels directly (see describe the properties of this type of "noise-aware" loss in our [NeurIPS 2016 paper](https://arxiv.org/abs/1605.07723)).If we want to use a library or model that doesn't accept probabilistic labels (such as Scikit-Learn), we can instead replace each label distribution with the label of the class that has the maximum probability.This can easily be done using the[`probs_to_preds` helper method](https://snorkel.readthedocs.io/en/master/packages/_autosummary/utils/snorkel.utils.probs_to_preds.htmlsnorkel.utils.probs_to_preds).We do note, however, that this transformation is lossy, as we no longer have values for our confidence in each label. ###Code from snorkel.utils import probs_to_preds preds_train_filtered = probs_to_preds(probs=probs_train_filtered) ###Output _____no_output_____ ###Markdown We then use these labels to train a classifier as usual. 
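As an aside, this conversion is, up to random tie-breaking, just an argmax over the per-class probabilities. A minimal equivalent sketch, assuming `probs_train_filtered` and `preds_train_filtered` from the cell above:
###Code
# Roughly what probs_to_preds does: pick the most probable class for each data point
hard_labels = probs_train_filtered.argmax(axis=1)
print((hard_labels == preds_train_filtered).mean())  # close to 1.0, apart from exact ties
###Output _____no_output_____ ###Markdown With the hard labels in hand, we train and evaluate the classifiers.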
###Code from sklearn.linear_model import LogisticRegression from sklearn.svm import LinearSVC Y_test = df_test.label.values for classifier in [LogisticRegression(C=1e3, solver="liblinear"), LinearSVC()]: classifier.fit(X=X_train, y=preds_train_filtered) print("Performance for ", type(classifier).__name__, "Bag of words") print(f"Test Accuracy: {classifier.score(X=X_test, y=test_labels) * 100:.1f}%") classifier.fit(X=X_train_ngrams, y=preds_train_filtered) print("Performance for ", type(classifier).__name__, "Bag of ngrams") print(f"Test Accuracy: {classifier.score(X=X_test_ngrams, y=test_labels) * 100:.1f}%") #print(preds_train_filtered) #print(test_labels) ###Output Performance for LogisticRegression Bag of words Test Accuracy: 69.4% Performance for LogisticRegression Bag of ngrams Test Accuracy: 61.6% Performance for LinearSVC Bag of words Test Accuracy: 69.0% Performance for LinearSVC Bag of ngrams Test Accuracy: 61.8% ###Markdown **We observe an additional boost in accuracy over the `LabelModel` by multiple points! This is in part because the discriminative model generalizes beyond the labeling function's labels and makes good predictions on all data points, not just the ones covered by labeling functions.By using the label model to transfer the domain knowledge encoded in our LFs to the discriminative model,we were able to generalize beyond the noisy labeling heuristics**. ###Code #Collect the new dataset and save it. auto_labeled_data = DataFrame( {'text': df_train_filtered.text.tolist(), 'sentiment': preds_train_filtered, }) auto_labeled_data.to_csv("../files/snorkellabeled_train.csv", sep="\t", index=False, header=False) ###Output _____no_output_____
1_Basic_Python/additional_content/01_jupyter_introduction.ipynb
###Markdown Introduction to Jupyter notebooks BackgroundThe Jupyter Notebook is an interactive web application that allows viewing, creation and documentation of live code.Notebook applications include data transformation, visualisation, modelling and machine learning. **What is Jupyter?**The *Jupyter Project* is an open source effort that evolved from the IPython project to support interactive data science and computing. Besides `Python`, it also supports many different programming languages including `R` and `Julia`. *(If you're familiar with the `R` programming language, Jupyter Notebook can be compared to R Markdown)*.Jupyter is an open source platform that contains a suite of tools including:* **Jupyter Notebook**: ***A browser-based interactive development environment (IDE) that allows users to write and run e.g. `python` codes in individual cells where the output is displayed under each executed cell.**** **JupyterLab**: A browser-based application that allows you to access multiple Jupyter Notebook files as well as other code and data files. * **Jupyter Hub**: A multi-person version of Jupyter Notebook and Lab that can be run on a server.In this tutorial, we aim at introducing you as much as necessary about jupyter notebook so that you could use it as your "code editor" to start playing around with Data Cube functions that you will learn in later tutorials. Optional (if needed): **How to install Jupyter Notebook?** 1. Install Jupyter NotebookWe recommend installing the classic Jupyter Notebook using the conda package manager. Either the miniconda or the miniforge conda distributions include a minimal conda installation.1. Download Miniconda: https://docs.conda.io/en/latest/miniconda.html2. Follow the Installation Instruction of Miniconda:https://conda.io/projects/conda/en/latest/user-guide/install/index.html3. Install the notebook with:`conda install -c conda-forge notebook`4. Run Jupyter Notebook with:`jupyter notebook` **Components of Jupyter Notebook**1. ***Jupyter Notebook IDE***: The application that launches in a web browser like Firefox or Safari and is the environment where you write and run your code.2. ***Jupyter Notebook Files***(`.ipynb`): The file format that you can use to store code and markdown text for individual projects and workflows.3. ***Kernels***: A kernel runs your code in a specific programming language. In this tutorial, Python kernel is used within the Jupyter Notebook IDE. **Jupyter Notebook User Interface**After you create a new notebook file (.ipynb), you will be presented with **notebook name**, **menu bar**, **tool bar** and a **code cell** as default starting cell.![figure of notebook user interface](https://jupyter-notebook.readthedocs.io/en/stable/_images/blank-notebook-ui.png)* **Notebook Name**: if you click at the notebook name, you could rename the file.* **Menu Bar**: presents all functions and settings of the notebook file.* **Tool Bar**: presents the most used tools as icons.* **Code Cell**: it is the default type of cell when you create a new cell; if you want to transfer it to a **markdown cell**, you could use the drop down box in tool bar or a keyboard shortcut. 
***The default keyboard shortcuts for some of the most frequently used functions are listed below:***

**Function** | **Keyboard Shortcut** | **Menu/Tool Bar**
:------ |:----------|:--------
**Create new Cell** | `esc`+`a` (insert new cell above); `esc`+`b` (insert new cell below) | Insert -> Insert Cell Above; Insert -> Insert Cell Below
**Copy Cell** | `c` | Copy Key in Toolbar
**Paste Cell** | `v` | Paste Key in Toolbar
**Edit Cell** | `Enter` |
**Run Cell** | `ctrl`+`enter` | Cell -> Run Cell
**Switch to Markdown Cell** | `esc`+`m` | Select 'Markdown' in Toolbar
**Switch to Code Cell** | `esc`+`y` | Select 'Code' in Toolbar
**Delete Cell** | double hit `d` | Edit -> Delete Cells
**Move to next/former Cell** | $\downarrow$ / $\uparrow$ | Click on the cell you want to work on |

***Tip: you can change the keyboard shortcuts of these functions in `menu bar --> help --> keyboard shortcuts`. Note that this option might not be available if you work in e.g. JupyterLab.*** **Code Cells in Jupyter Notebook**When you run a Code cell, the outputs will be displayed under the executed cell. For instance: ###Code print("Hello World!") ###Output _____no_output_____
1537_800a39ea-13c1-4ced-97ec-9585ff3d0e1c_cf_Noteboo_HnqRmZt.ipynb
###Markdown Introduction ApproachFirst of all, I noticed that the time series data is irregular and has missing data. So, I resampled and imputed the data. I tried different machine learning models from Sklearn but that didn't work well. After that I switched to deep learning models where I tried neural network and LSTMs with different sequence length but that was also not giving very good results.After this I decided not to impute data, since it adds bias that the missing data is linearly interpolated. After deep learning models didn't work, I switched to XGBoost, LightGBM and Catboost which gave me an RMSE around ~ 34.Now, in order to further improve it, I looked into different methods for irregular timeseries like:1. https://www.youtube.com/watch?v=E4NMZyfao2c2. Research papers3. A very nice blog post: https://www.notion.so/Corrupt-sparse-irregular-and-ugly-Deep-learning-on-time-series-887b823df439417bb8428a3474d939b3But most of these were too complex to try in the short time span. During exploration, I also got the idea of adding holidays to data.I added more time based features based on https://scikit-learn.org/stable/auto_examples/applications/plot_cyclical_feature_engineering.html and retuned my models. These periodic features help model the periodicity of time.For neural networks, I modified layers, different hyperparameters and for catboost and lightgbm, I had used hyperopt to search for hyperparams. After that, I finally got the best result as 33.02. I am really interested in knowing about 32.02 solution and trying some of the methods mentioned in the blog. My neural networks were trained in different conda environment so those cells have not been run in this notebook. Read Dataset ###Code train_df = pd.read_csv('dataset/train.csv') test_df = pd.read_csv('dataset/test.csv') sample_df = pd.read_csv('dataset/sample.csv') combined_df = pd.concat([train_df, test_df]) ###Output _____no_output_____ ###Markdown Add Time Difference ###Code combined_df.date = pd.to_datetime(combined_df.date) combined_df.hour = pd.to_timedelta(combined_df.hour, unit='h') combined_df.index = combined_df.date combined_df.index = combined_df.index + combined_df.hour combined_df.drop(['date','hour'], axis=1, inplace=True) combined_df['prev_time_difference'] = combined_df.index.to_series().diff()/ np.timedelta64(1, 'h') combined_df = combined_df.dropna(subset=['prev_time_difference']) combined_df.head() ###Output _____no_output_____ ###Markdown Add Holidays ###Code combined_df['is_holiday'] = 0 karnataka_holidays = holidays.IN(subdiv='KA', years=[2018,2019,2020,2021,2022]) for index, row in combined_df.iterrows(): if index in karnataka_holidays: combined_df.loc[index, 'is_holiday'] = 1 ###Output _____no_output_____ ###Markdown Add Time Features ###Code combined_df["day"] = combined_df.index.day combined_df["week"] = combined_df.index.isocalendar().week combined_df["month"] = combined_df.index.month combined_df["quarter"] = combined_df.index.quarter combined_df["year"] = combined_df.index.year combined_df["hour"] = combined_df.index.hour combined_df["dayofyear"] = combined_df.index.dayofyear combined_df['day_of_week'] = combined_df.index.day_of_week.astype(int) combined_df["is_month_start"] = combined_df.index.is_month_start.astype(int) combined_df["is_month_end"] = combined_df.index.is_month_end.astype(int) combined_df["is_quarter_start"] = combined_df.index.is_quarter_start.astype(int) combined_df["is_quarter_end"] = combined_df.index.is_quarter_end.astype(int) combined_df["is_year_start"] = 
combined_df.index.is_year_start.astype(int) combined_df["is_year_end"] = combined_df.index.is_year_end.astype(int) combined_df["is_leap_year"] = combined_df.index.is_leap_year.astype(int) combined_df["days_in_month"] = combined_df.index.days_in_month combined_df['is_weekend'] = np.where(combined_df['day_of_week'].isin([5,6]),1,0) combined_df.head() ###Output _____no_output_____ ###Markdown Adding More Time Related Periodic Features ###Code from sklearn.preprocessing import FunctionTransformer from sklearn.preprocessing import SplineTransformer from sklearn.preprocessing import PolynomialFeatures def sin_transformer(period): return FunctionTransformer(lambda x: np.sin(x / period * 2 * np.pi)) def cos_transformer(period): return FunctionTransformer(lambda x: np.cos(x / period * 2 * np.pi)) def periodic_spline_transformer(period, n_splines=None, degree=3): if n_splines is None: n_splines = period n_knots = n_splines + 1 # periodic and include_bias is True return SplineTransformer( degree=degree, n_knots=n_knots, knots=np.linspace(0, period, n_knots).reshape(n_knots, 1), extrapolation="periodic", include_bias=True, ) combined_df["sin_week"] = sin_transformer(7).fit_transform(combined_df['week']) combined_df["sin_month"] = sin_transformer(12).fit_transform(combined_df['month']) combined_df["sin_quarter"] = sin_transformer(4).fit_transform(combined_df['quarter']) combined_df["sin_hour"] = sin_transformer(24).fit_transform(combined_df['hour']) combined_df["sin_dayofyear"] = sin_transformer(365).fit_transform(combined_df['dayofyear']) combined_df['sin_day_of_week'] = sin_transformer(7).fit_transform(combined_df['day_of_week']) combined_df["cos_week"] = cos_transformer(7).fit_transform(combined_df['week']) combined_df["cos_month"] = cos_transformer(12).fit_transform(combined_df['month']) combined_df["cos_quarter"] = cos_transformer(4).fit_transform(combined_df['quarter']) combined_df["cos_hour"] = cos_transformer(24).fit_transform(combined_df['hour']) combined_df["cos_dayofyear"] = cos_transformer(365).fit_transform(combined_df['dayofyear']) combined_df['cos_day_of_week'] = cos_transformer(7).fit_transform(combined_df['day_of_week']) combined_df.head() spline_week= periodic_spline_transformer(7, n_splines=3).fit_transform(combined_df['week'].to_numpy().reshape(-1,1)) spline_month= periodic_spline_transformer(12, n_splines=6).fit_transform(combined_df['month'].to_numpy().reshape(-1,1)) spline_quarter= periodic_spline_transformer(4, n_splines=2, degree=2).fit_transform(combined_df['quarter'].to_numpy().reshape(-1,1)) spline_hour= periodic_spline_transformer(24, n_splines=12).fit_transform(combined_df['hour'].to_numpy().reshape(-1,1)) spline_dayofyear= periodic_spline_transformer(365, n_splines=182).fit_transform(combined_df['dayofyear'].to_numpy().reshape(-1,1)) spline_day_of_week= periodic_spline_transformer(7, n_splines=3).fit_transform(combined_df['day_of_week'].to_numpy().reshape(-1,1)) for i in range(spline_week.shape[1]): combined_df[f"spline_week_{i}"] = spline_week[:,i] for i in range(spline_month.shape[1]): combined_df[f"spline_month_{i}"] = spline_month[:,i] for i in range(spline_quarter.shape[1]): combined_df[f"spline_quarter_{i}"] = spline_quarter[:,i] for i in range(spline_hour.shape[1]): combined_df[f"spline_hour_{i}"] = spline_hour[:,i] for i in range(spline_dayofyear.shape[1]): combined_df[f"spline_dayofyear_{i}"] = spline_dayofyear[:,i] for i in range(spline_day_of_week.shape[1]): combined_df[f"spline_day_of_week_{i}"] = spline_day_of_week[:,i] combined_df.head() ###Output 
_____no_output_____ ###Markdown Train Test Split ###Code '''train_data = combined_df[~combined_df.demand.isna()] test_data = combined_df[combined_df.demand.isna()] features = combined_df.columns.tolist() features.remove('demand') target = ['demand'] X = train_data[features] y = train_data[target] #max_demand = y.max().values[0] #y = y / max_demand print(X.shape, y.shape)''' features = combined_df.columns.tolist() features.remove('demand') target = ['demand'] train_data = combined_df.loc[:'2020-08-01'] X_train = train_data[features].values y_train = train_data[target].values val_data = combined_df.loc['2020-08-01':'2020-12-27'] X_val = val_data[features].values y_val = val_data[target].values test_data = combined_df[combined_df.demand.isna()] X_test = test_data[features].values fig, axs = plt.subplots(1,1, figsize=(10,3)) train_data.demand.plot() val_data.demand.plot() ###Output _____no_output_____ ###Markdown Model ###Code from catboost import CatBoostRegressor, metrics, Pool, cv from sklearn.preprocessing import StandardScaler params = { 'n_estimators':50000, 'learning_rate': 0.0005, #'random_seed': 11, 'eval_metric':"MSLE", 'max_depth': 8, 'use_best_model':True, 'early_stopping_rounds':500, } train_pool = Pool(X_train, y_train) validate_pool = Pool(X_val, y_val) model = CatBoostRegressor(**params).fit(train_pool, eval_set=validate_pool, verbose=10) feature_importances = model.get_feature_importance(train_pool) feature_names = features for score, name in sorted(zip(feature_importances, feature_names), reverse=True): print('{}: {}'.format(name, score)) X_test = test_data[features].values y_test = model.predict(X_test) sample_df.demand = y_test sample_df.to_csv('submission_new_catboost.csv', index=False) ###Output _____no_output_____ ###Markdown ML Models using Sklearn ###Code from sklearn.ensemble import BaggingRegressor, RandomForestRegressor ,AdaBoostRegressor, GradientBoostingRegressor, VotingRegressor from sklearn.linear_model import LinearRegression, Ridge, Lasso, BayesianRidge, ElasticNet, LassoLars, PassiveAggressiveRegressor from sklearn.svm import SVR, NuSVR from sklearn.neighbors import KNeighborsRegressor, RadiusNeighborsRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.preprocessing import StandardScaler, MinMaxScaler from sklearn.metrics import mean_squared_error regs = [RandomForestRegressor(), AdaBoostRegressor(), GradientBoostingRegressor(), LinearRegression(), Ridge(), Lasso(), BayesianRidge(), ElasticNet(), LassoLars(), PassiveAggressiveRegressor(), SVR(), KNeighborsRegressor(), DecisionTreeRegressor()] for reg in regs: reg.fit(X_train, y_train.ravel()) y_pred = reg.predict(X_val) print("{} : {}".format( reg.__class__, mean_squared_error(y_val.ravel(), y_pred.ravel(), squared=False))) regs = [ ('rf',RandomForestRegressor()), ('gb',GradientBoostingRegressor()), ('ab',AdaBoostRegressor()), ] reg = VotingRegressor(regs) reg.fit(X_train, y_train.ravel()) y_pred = reg.predict(X_val) mean_squared_error(y_val.ravel(), y_pred.ravel(), squared=False) ###Output _____no_output_____ ###Markdown Light GBM ###Code from lightgbm import LGBMRegressor from hyperopt import hp, fmin, tpe, STATUS_OK, Trials from tqdm.notebook import tqdm def hyperopt_objective(args): n_jobs = 2 default_params = {"seed": 42} params = { "boosting": args["boosting"], "learning_rate": args["learning_rate"], "num_iterations": int(args["num_iterations"]), "num_leaves": int(args["num_leaves"]), "max_depth": int(args["max_depth"]), "min_data_in_leaf": int(args["min_data_in_leaf"]), 
"min_sum_hessian_in_leaf": args["min_sum_hessian_in_leaf"], "bagging_fraction": args["bagging_fraction"], "bagging_freq": int(args["bagging_freq"]), "feature_fraction": args["feature_fraction"], "extra_trees": args["extra_trees"], "lambda_l1": args["lambda_l1"], "lambda_l2": args["lambda_l2"], "path_smooth": args["path_smooth"], "max_bin": int(args["max_bin"]), } default_params.update(params) model = LGBMRegressor(**default_params) eval_set = [(X_val, y_val)] model.fit(X_train, y_train, eval_set=eval_set, early_stopping_rounds=10, verbose=False, eval_metric='rmse') best_rmse = min(model.evals_result_['valid_0']['rmse']) return best_rmse '''space = { "boosting": hp.pchoice("boosting", [(0.75, "gbdt"), (0.25, "dart")]), "learning_rate": 10 ** hp.uniform("learning_rate", -2, 0), "num_iterations": hp.quniform("num_iterations", 1, 1000, 1), "num_leaves": 2 ** hp.uniform("num_leaves", 1, 8), "max_depth": -1, "min_data_in_leaf": 2 * 10 ** hp.uniform("min_data_in_leaf", 0, 2), "min_sum_hessian_in_leaf": hp.uniform("min_sum_hessian_in_leaf", 1e-4, 1e-2), "bagging_fraction": hp.uniform("bagging_fraction", 0.5, 1.0), "bagging_freq": hp.qlognormal("bagging_freq", 0.0, 1.0, 1), "feature_fraction": hp.uniform("feature_fraction", 0.5, 1.0), "extra_trees": hp.pchoice("extra_trees", [(0.75, False), (0.25, True)]), "lambda_l1": hp.lognormal("lambda_l1", 0.0, 1.0), "lambda_l2": hp.lognormal("lambda_l2", 0.0, 1.0), "path_smooth": hp.lognormal("path_smooth", 0.0, 1.0), "max_bin": 2 ** hp.quniform("max_bin", 6, 10, 1) - 1, } trials = Trials() best = fmin(fn=hyperopt_objective, space=space, algo=tpe.suggest, max_evals=500, trials=trials)''' n_jobs = 2 params = { 'n_estimators':50000, 'learning_rate': 0.0005, #'seed': 11, 'eval_metric':"MSLE", 'max_depth': 6, 'use_best_model':True, 'early_stopping_rounds':500, } model = LGBMRegressor(**params) eval_set = [(X_val, y_val)] model.fit(X_train, y_train, eval_set=eval_set, early_stopping_rounds=10, verbose=True, eval_metric='rmse') y_test = model.predict(X_test) sample_df.demand = y_test sample_df.to_csv('submission_lgbm.csv', index=False) ###Output _____no_output_____ ###Markdown Neural Network ###Code import torch import os.path as osp import torch.nn as nn import torch.optim as optim from torch.utils.data import Dataset, DataLoader from tqdm.notebook import tqdm from sklearn.preprocessing import StandardScaler train_scaler = StandardScaler().fit(X_train) target_scaler = StandardScaler().fit(y_train) X_train = train_scaler.transform(X_train) y_train = target_scaler.transform(y_train) X_val = train_scaler.transform(X_val) y_val = target_scaler.transform(y_val) class TimeSeriesDataset(Dataset): def __init__(self, X, y): self.X = X self.y = y def __len__(self): return self.X.__len__() def __getitem__(self, idx): return np.array(self.X[idx], dtype=float), np.array(self.y[idx], dtype='float') class TimeSeriesModel(nn.Module): def __init__(self, num_features): super(TimeSeriesModel, self).__init__() self.linear1 = nn.Linear(num_features, 128) self.linear2 = nn.Linear(128,64) self.linear3 = nn.Linear(64,16) self.linear4 = nn.Linear(16,1) self.dropout = nn.Dropout(0.5) self.activation1 = nn.ReLU() def forward(self, x): x = self.dropout(self.linear1(x)) x = self.activation1(x) x = self.dropout(self.linear2(x)) x = self.activation1(x) x = self.dropout(self.linear3(x)) x = self.activation1(x) x = self.linear4(x) return x # Hyperparameters n_epochs = 1000 n_epochs_stop = 30 input_size = X_train.shape[1] output_size = 1 batch_size = 64 device = 'cuda' if torch.cuda.is_available() 
else 'cpu' model_dir = 'models' train_dataset = TimeSeriesDataset(X_train, y_train) val_dataset = TimeSeriesDataset(X_val, y_val) train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False) val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False) model = TimeSeriesModel(num_features=input_size).cuda() criterion = torch.nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001) best_loss = np.inf epochs_no_improve = 0 model_name = 'nn_model_new' for epochs in range(1, n_epochs+1): train_loss = 0 model.train() for data, target in train_loader: optimizer.zero_grad() data = torch.Tensor(np.array(data)).to(device) target = target.reshape(-1,1) output = model(data.float()) loss = criterion(output, target.float().to(device)) if type(criterion) == torch.nn.modules.loss.MSELoss: loss = torch.sqrt(loss) loss.backward() optimizer.step() #scheduler.step() train_loss += loss.item() train_loss /= len(train_loader) model.eval() val_loss = 0 with torch.no_grad(): for data, target in val_loader: data = torch.Tensor(np.array(data)).to(device) target = target.reshape(-1,1) output = model(data.float()) loss = criterion(output, target.float().to(device)) if type(criterion) == torch.nn.modules.loss.MSELoss: loss = torch.sqrt(loss) val_loss += loss.item() val_loss /= len(val_loader) # early stopping if val_loss < best_loss: best_loss = val_loss torch.save(model.state_dict(), osp.join(model_dir, '{}.pt'.format(model_name))) epochs_no_improve = 0 else: epochs_no_improve += 1 if epochs_no_improve == n_epochs_stop: #print("Early stopping.") break print(f'Epoch {epochs} train loss: {round(train_loss,8)} val loss: {round(val_loss,8)}') print('best loss: {}'.format(best_loss)) model = TimeSeriesModel(num_features=input_size).cuda() model.load_state_dict(torch.load('models/{}.pt'.format(model_name))) model.eval() predictions = [] true = [] with torch.no_grad(): for data, target in val_loader: data = torch.Tensor(np.array(data)).to(device) output = model(data.float()) predictions.extend(output.squeeze().tolist()) true.extend(target.squeeze().tolist()) from sklearn.metrics import mean_squared_error true = target_scaler.inverse_transform(np.array(true).reshape(-1,1)).flatten() predictions = target_scaler.inverse_transform(np.array(predictions).reshape(-1,1)).flatten() print(mean_squared_error(true, predictions, squared=False)) X_test = train_scaler.transform(X_test) test_dataset = TimeSeriesDataset(X_test, y_train) test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False) predictions = [] true = [] with torch.no_grad(): for data, target in test_loader: data = torch.Tensor(np.array(data)).to(device) output = model(data.float()) predictions.extend(output.squeeze().tolist()) true.extend(target.squeeze().tolist()) predictions = target_scaler.inverse_transform(np.array(predictions).reshape(-1,1)).flatten() sample_df.demand = predictions sample_df.to_csv('submission_nn.csv', index=False) ###Output _____no_output_____ ###Markdown LSTM ###Code class TimeSeriesDataset(Dataset): def __init__(self, X, y, seq_len): self.X = X self.y = y self.seq_len = seq_len def __len__(self): return self.X.__len__() - self.seq_len def __getitem__(self, idx): return np.array(self.X[idx:idx+self.seq_len]), np.array(self.y[idx+self.seq_len]) class Model_LSTM(nn.Module): def __init__(self, num_features, hidden_units, timesteps, lstm_layers=1): super().__init__() self.num_features = num_features # this is the number of features self.hidden_units = hidden_units self.num_layers = 
lstm_layers self.seq_len = timesteps # self.proj_size = 64 dense1 = 1024 dense2 = 512 self.lstm = nn.LSTM( input_size=num_features, hidden_size=hidden_units, batch_first=True, num_layers=self.num_layers, dropout = 0.2, # proj_size = self.proj_size ) # self.lstm_linear = nn.Linear(in_features=self.hidden_units*self.num_layers, out_features=1024) self.dropout = nn.Dropout(0.3) self.lstm_linear = nn.Linear(self.seq_len*self.hidden_units,dense1) self.linear_mid = nn.Linear(dense1,dense2) self.linear_out = nn.Linear(dense2,1) def forward(self, x): lstm_out, (hn, _) = self.lstm(x) ## all lstm layer hidden states lstm_out = lstm_out.reshape(lstm_out.shape[0], -1) out0 = self.lstm_linear(lstm_out) out0 = self.dropout(out0) out1 = self.linear_mid(out0) out1 = self.dropout(out1) out = self.linear_out(out1) return out # Hyperparameters n_epochs = 100 n_epochs_stop = 30 input_size = X_train.shape[1] output_size = 1 batch_size = 256 device = 'cuda' if torch.cuda.is_available() else 'cpu' model_dir = 'models' sequence_length = 30 train_dataset = TimeSeriesDataset(X_train, y_train, seq_len=sequence_length) val_dataset = TimeSeriesDataset(X_val, y_val, seq_len=sequence_length) train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False) val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=False) model = Model_LSTM(num_features=input_size, hidden_units=128, timesteps=sequence_length, lstm_layers=3).cuda() criterion = torch.nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.0001) best_loss = np.inf epochs_no_improve = 0 model_name = 'nn_model_lstm' for epochs in range(1, n_epochs+1): train_loss = 0 model.train() for data, target in train_loader: optimizer.zero_grad() data = torch.Tensor(np.array(data)).to(device) target = target.reshape(-1,1) output = model(data.float()) loss = criterion(output, target.float().to(device)) if type(criterion) == torch.nn.modules.loss.MSELoss: loss = torch.sqrt(loss) loss.backward() optimizer.step() #scheduler.step() train_loss += loss.item() train_loss /= len(train_loader) model.eval() val_loss = 0 with torch.no_grad(): for data, target in val_loader: data = torch.Tensor(np.array(data)).to(device) target = target.reshape(-1,1) output = model(data.float()) loss = criterion(output, target.float().to(device)) if type(criterion) == torch.nn.modules.loss.MSELoss: loss = torch.sqrt(loss) val_loss += loss.item() val_loss /= len(val_loader) # early stopping if val_loss < best_loss: best_loss = val_loss torch.save(model.state_dict(), osp.join(model_dir, '{}.pt'.format(model_name))) epochs_no_improve = 0 else: epochs_no_improve += 1 if epochs_no_improve == n_epochs_stop: #print("Early stopping.") break print(f'Epoch {epochs} train loss: {round(train_loss,8)} val loss: {round(val_loss,8)}') print('best loss: {}'.format(best_loss)) model = Model_LSTM(num_features=input_size, hidden_units=128, timesteps=sequence_length, lstm_layers=3).cuda() model.load_state_dict(torch.load('models/{}.pt'.format(model_name))) model.eval() predictions = [] true = [] with torch.no_grad(): for data, target in val_loader: data = torch.Tensor(np.array(data)).to(device) output = model(data.float()) predictions.extend(output.squeeze().tolist()) true.extend(target.squeeze().tolist()) from sklearn.metrics import mean_squared_error true = target_scaler.inverse_transform(np.array(true).reshape(-1,1)).flatten() predictions = target_scaler.inverse_transform(np.array(predictions).reshape(-1,1)).flatten() print(mean_squared_error(true, predictions, squared=False)) 
test_dataset = TimeSeriesDataset(X_test, y_train, seq_len=sequence_length) test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False) predictions = [] true = [] with torch.no_grad(): for data, target in test_loader: data = torch.Tensor(np.array(data)).to(device) output = model(data.float()) predictions.extend(output.squeeze().tolist()) true.extend(target.squeeze().tolist()) predictions = target_scaler.inverse_transform(np.array(predictions).reshape(-1,1)).flatten() sample_df.demand = predictions sample_df.to_csv('submission_lstm.csv', index=False) ###Output _____no_output_____
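###Markdown Both training loops above repeat the same early-stopping pattern: keep the best validation loss, checkpoint the weights, and stop after `n_epochs_stop` epochs without improvement. A minimal sketch of that pattern factored into a helper (assuming the same `model_dir` and checkpoint naming as the loops above; each epoch one would call `stopper.step(val_loss, model)` and break when it returns True): ###Code
import os.path as osp
import numpy as np
import torch

class EarlyStopping:
    """Track the best validation loss, checkpoint the model, and signal when patience runs out."""
    def __init__(self, patience, model_dir, model_name):
        self.patience = patience
        self.path = osp.join(model_dir, '{}.pt'.format(model_name))
        self.best_loss = np.inf
        self.epochs_no_improve = 0

    def step(self, val_loss, model):
        # Returns True when training should stop
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            torch.save(model.state_dict(), self.path)
            self.epochs_no_improve = 0
        else:
            self.epochs_no_improve += 1
        return self.epochs_no_improve >= self.patience
###Output _____no_output_____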
notebooks/student-admissions/StudentAdmissions.ipynb
###Markdown Predicting Student Admissions with Neural NetworksIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:- GRE Scores (Test)- GPA Scores (Grades)- Class rank (1-4)The dataset originally came from here: http://www.ats.ucla.edu/ Loading the dataTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:- https://pandas.pydata.org/pandas-docs/stable/- https://docs.scipy.org/ ###Code # Importing pandas and numpy import pandas as pd import numpy as np # Reading the csv file into a pandas DataFrame data = pd.read_csv('student_data.csv') # Printing out the first 10 rows of our data data[:10] ###Output _____no_output_____ ###Markdown Plotting the dataFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ingore the rank. ###Code # Importing matplotlib import matplotlib.pyplot as plt %matplotlib inline # Function to help us plot def plot_points(data): X = np.array(data[["gre","gpa"]]) y = np.array(data["admit"]) admitted = X[np.argwhere(y==1)] rejected = X[np.argwhere(y==0)] plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k') plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k') plt.xlabel('Test (GRE)') plt.ylabel('Grades (GPA)') # Plotting the points plot_points(data) plt.show() ###Output _____no_output_____ ###Markdown Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank. ###Code # Separating the ranks data_rank1 = data[data["rank"]==1] data_rank2 = data[data["rank"]==2] data_rank3 = data[data["rank"]==3] data_rank4 = data[data["rank"]==4] # Plotting the graphs plot_points(data_rank1) plt.title("Rank 1") plt.show() plot_points(data_rank2) plt.title("Rank 2") plt.show() plot_points(data_rank3) plt.title("Rank 3") plt.show() plot_points(data_rank4) plt.title("Rank 4") plt.show() ###Output _____no_output_____ ###Markdown This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it. TODO: One-hot encoding the rankUse the `get_dummies` function in pandas in order to one-hot encode the data.Hint: To drop a column, it's suggested that you use `one_hot_data`[.drop( )](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html). ###Code # TODO: Make dummy variables for rank and concat existing columns one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis = 1) # TODO: Drop the previous rank column one_hot_data = one_hot_data.drop('rank', axis = 1) # Print the first 10 rows of our data one_hot_data[:10] ###Output _____no_output_____ ###Markdown TODO: Scaling the dataThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800. 
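###Markdown A rough alternative sketch before the TODO below: the same kind of rescaling can be done with sklearn's MinMaxScaler. Note it scales by the observed minimum and maximum of each column rather than the fixed 4.0 and 800 divisors, so the values come out slightly different. ###Code
# Alternative sketch: min-max scaling of the two numeric columns with scikit-learn
from sklearn.preprocessing import MinMaxScaler

alt_scaled = one_hot_data.copy()
alt_scaled[['gre', 'gpa']] = MinMaxScaler().fit_transform(alt_scaled[['gre', 'gpa']])
alt_scaled[:10]
###Output _____no_output_____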
###Code # Making a copy of our data processed_data = one_hot_data[:] # TODO: Scale the columns processed_data['gre'] = processed_data['gre']/800 processed_data['gpa'] = processed_data['gpa']/4 # Printing the first 10 rows of our procesed data processed_data[:10] ###Output _____no_output_____ ###Markdown Splitting the data into Training and Testing In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data. ###Code sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False) train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample) print("Number of training samples is", len(train_data)) print("Number of testing samples is", len(test_data)) print(train_data[:10]) print(test_data[:10]) ###Output Number of training samples is 360 Number of testing samples is 40 admit gre gpa rank_1 rank_2 rank_3 rank_4 351 0 0.775 0.214375 0 0 1 0 373 1 0.775 0.210625 1 0 0 0 161 0 0.800 0.218750 0 1 0 0 140 0 0.800 0.245625 0 1 0 0 398 0 0.875 0.228125 0 1 0 0 244 0 0.675 0.190000 1 0 0 0 31 0 0.950 0.209375 0 0 1 0 350 1 0.975 0.250000 0 1 0 0 9 0 0.875 0.245000 0 1 0 0 388 0 0.800 0.198125 0 1 0 0 admit gre gpa rank_1 rank_2 rank_3 rank_4 13 0 0.875 0.192500 0 1 0 0 27 1 0.650 0.233750 0 0 0 1 42 1 0.750 0.196875 0 1 0 0 43 0 0.625 0.206875 0 0 1 0 46 1 0.725 0.216250 0 1 0 0 47 0 0.625 0.185625 0 0 0 1 48 0 0.550 0.155000 0 0 0 1 57 0 0.475 0.183750 0 0 1 0 70 0 0.800 0.250000 0 0 1 0 77 1 1.000 0.250000 0 0 1 0 ###Markdown Splitting the data into features and targets (labels)Now, as a final step before the training, we'll split the data into features (X) and targets (y). ###Code features = train_data.drop('admit', axis=1) targets = train_data['admit'] features_test = test_data.drop('admit', axis=1) targets_test = test_data['admit'] print(features[:10]) print(targets[:10]) ###Output gre gpa rank_1 rank_2 rank_3 rank_4 351 0.775 0.214375 0 0 1 0 373 0.775 0.210625 1 0 0 0 161 0.800 0.218750 0 1 0 0 140 0.800 0.245625 0 1 0 0 398 0.875 0.228125 0 1 0 0 244 0.675 0.190000 1 0 0 0 31 0.950 0.209375 0 0 1 0 350 0.975 0.250000 0 1 0 0 9 0.875 0.245000 0 1 0 0 388 0.800 0.198125 0 1 0 0 351 0 373 1 161 0 140 0 398 0 244 0 31 0 350 1 9 0 388 0 Name: admit, dtype: int64 ###Markdown Training the 2-layer Neural NetworkThe following function trains the 2-layer neural network. First, we'll write some helper functions. ###Code # Activation (sigmoid) function def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_prime(x): return sigmoid(x) * (1-sigmoid(x)) def error_formula(y, output): return - y*np.log(output) - (1 - y) * np.log(1-output) ###Output _____no_output_____ ###Markdown TODO: Backpropagate the errorNow it's your turn to shine. Write the error term. 
Remember that this is given by the equation $$ (y-\hat{y}) \sigma'(x) $$ ###Code # TODO: Write the error term formula def error_term_formula(x, y, output): return ((y - output) * sigmoid_prime(x)) # Neural Network hyperparameters epochs = 1000 learnrate = 0.5 # Training function def train_nn(features, targets, epochs, learnrate): # Use to same seed to make debugging easier np.random.seed(42) n_records, n_features = features.shape last_loss = None # Initialize weights weights = np.random.normal(scale=1 / n_features**.5, size=n_features) for e in range(epochs): del_w = np.zeros(weights.shape) for x, y in zip(features.values, targets): # Loop through all records, x is the input, y is the target # Activation of the output unit # Notice we multiply the inputs and the weights here # rather than storing h as a separate variable output = sigmoid(np.dot(x, weights)) # The error, the target minus the network output error = error_formula(y, output) # The error term error_term = error_term_formula(x, y, output) # The gradient descent step, the error times the gradient times the inputs del_w += error_term * x # Update the weights here. The learning rate times the # change in weights, divided by the number of records to average weights += learnrate * del_w / n_records # Printing out the mean square error on the training set if e % (epochs / 10) == 0: out = sigmoid(np.dot(features, weights)) loss = np.mean((out - targets) ** 2) print("Epoch:", e) if last_loss and last_loss < loss: print("Train loss: ", loss, " WARNING - Loss Increasing") else: print("Train loss: ", loss) last_loss = loss print("=========") print("Finished training!") return weights weights = train_nn(features, targets, epochs, learnrate) ###Output Epoch: 0 Train loss: 0.28190852290524643 ========= Epoch: 100 Train loss: 0.21024745981794477 ========= Epoch: 200 Train loss: 0.20732667162332336 ========= Epoch: 300 Train loss: 0.206135150422786 ========= Epoch: 400 Train loss: 0.20546814868661892 ========= Epoch: 500 Train loss: 0.20505123547673298 ========= Epoch: 600 Train loss: 0.2047566001670719 ========= Epoch: 700 Train loss: 0.20452362328510365 ========= Epoch: 800 Train loss: 0.20432310399233436 ========= Epoch: 900 Train loss: 0.20414066026284275 ========= Finished training! ###Markdown Calculating the Accuracy on the Test Data ###Code # Calculate accuracy on test data test_out = sigmoid(np.dot(features_test, weights)) predictions = test_out > 0.5 accuracy = np.mean(predictions == targets_test) print("Prediction accuracy: {:.3f}".format(accuracy)) ###Output Prediction accuracy: 0.650
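###Markdown As a last sanity check (a small sketch reusing the `weights` trained above): score a single held-out student with the same forward pass and compare against the true label. ###Code
# Score the first student in the test split with the trained weights
example = features_test.iloc[0].values
prob = sigmoid(np.dot(example, weights))
print("Admission probability: {:.3f} (true label: {})".format(prob, targets_test.iloc[0]))
###Output _____no_output_____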
tests/test_plotVectorField.ipynb
###Markdown test 1: lines are properly centered ###Code # small coordinate grid so that pixels can be visualized x = np.linspace(-1.5,1.5,5) xx, yy = np.meshgrid(x,x) # random retardance, orientation rotates azimuthally. retardance = np.random.randn(*xx.shape) orientation = np.arctan2(xx,yy)%np.pi fig1 = plt.figure(figsize=(18,5)) # create a figure with the default size ax1= fig1.add_subplot(131) pltOrder.plotVectorField(retardance,orientation,spacing=1,window=1,colorOrient=True,linewidth=0.1,linelength=1,clim=[-1, 1]); ax2 = fig1.add_subplot(132) plt.imshow(retardance,cmap='gray', vmin=-1, vmax=1); plt.title('retardance') plt.colorbar(ax=ax2); ax3=fig1.add_subplot(133) im=plt.imshow(orientation, cmap='hsv'); # Need to adapt this look up table to represent orientation. plt.title('slow axis') plt.colorbar(ax=ax3); ###Output _____no_output_____ ###Markdown test 2: line spacing and window can be controlled separately. ###Code x = np.linspace(-1.5,1.5,11) xx, yy = np.meshgrid(x,x) xyextent=[-1.5,1.5,-1.5,1.5] # retardance: increases from 0 to full-wave (2pi) over radius of 1, and then stays constant retardance = 2*np.pi*np.sqrt(xx**2+yy**2) retardance[retardance >2*np.pi] = 2*np.pi # orientation rotates azimuthally. orientation = np.arctan2(xx,yy)%np.pi # transmission is assumed to be zero beyond radius 1. transmission = np.sqrt(xx**2+yy**2)<=1 transmission = transmission.astype('float32') fig2 = plt.figure(figsize=(18,5)) ax=plt.subplot(131) plt.imshow(retardance,cmap='gray',extent=xyextent); plt.colorbar(ax=ax); ax=plt.subplot(132) plt.imshow(orientation,cmap='hsv',extent=xyextent); plt.colorbar(ax=ax); ax=plt.subplot(133) plt.imshow(transmission,cmap='gray',extent=xyextent); plt.colorbar(ax=ax); fig3 = plt.figure(figsize=(18,5)) ax=plt.subplot(131) pltOrder.plotVectorField(retardance,orientation,spacing=1,window=1,colorOrient=True,linewidth=0.1,linelength=1); plt.title('spacing=1, window=1') ax=plt.subplot(132) pltOrder.plotVectorField(retardance,orientation,spacing=3,window=1,colorOrient=True,linewidth=0.1,linelength=1); plt.title('spacing=3, window=1') ax=plt.subplot(133) pltOrder.plotVectorField(retardance,orientation,spacing=1,window=3,colorOrient=True,linewidth=0.1,linelength=1); plt.title('spacing=1, window=3') ###Output _____no_output_____ ###Markdown test 3: lines can be scaled and masked separately ###Code x = np.linspace(-1.5,1.5,21) xx, yy = np.meshgrid(x,x) xyextent=[-1.5,1.5,-1.5,1.5] # retardance: increases from 0 to full-wave (2pi) over radius of 1, and then stays constant retardance = 2*np.pi*np.sqrt(xx**2+yy**2) retardance[retardance >2*np.pi] = 2*np.pi # orientation rotates azimuthally. orientation = np.arctan2(xx,yy)%np.pi # transmission is assumed to be zero beyond radius 1. 
transmission = np.sqrt(xx**2+yy**2)<=1 transmission = transmission.astype('float32') fig2 = plt.figure(figsize=(18,5)) ax=plt.subplot(131) plt.imshow(retardance,cmap='gray',extent=xyextent); plt.colorbar(ax=ax); ax=plt.subplot(132) plt.imshow(orientation,cmap='hsv',extent=xyextent); plt.colorbar(ax=ax); ax=plt.subplot(133) plt.imshow(transmission,cmap='gray',extent=xyextent); plt.colorbar(ax=ax); fig3 = plt.figure(figsize=(18,18)) ax=plt.subplot(221) pltOrder.plotVectorField(transmission,orientation,spacing=1,window=1,colorOrient=False,linewidth=0.1,linelength=1, cmapImage='viridis'); plt.title('image transmission, lines orientation') ax=plt.subplot(222) pltOrder.plotVectorField(transmission,orientation,anisotropy=retardance,spacing=1,window=1,colorOrient=True,linewidth=0.1,linelength=0.5, cmapImage='gray'); plt.title('previous with lenth$\propto$retardance and lines colored') ax=plt.subplot(223) pltOrder.plotVectorField(transmission,orientation,anisotropy=retardance,spacing=1,window=7,colorOrient=True,linewidth=0.1,linelength=0.5, cmapImage='gray'); plt.title('same as previous, window = 7 '); ax=plt.subplot(224) pltOrder.plotVectorField(transmission,orientation,anisotropy=retardance,threshold = transmission.astype('bool'), \ spacing=1,window=7,colorOrient=True,linewidth=0.1,linelength=0.5, cmapImage='gray'); plt.title('same as previous, mask = transmission '); ###Output _____no_output_____
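###Markdown One extra numerical check (a small sketch alongside the plotting tests): the azimuthal orientation map built with `np.arctan2(xx, yy) % np.pi` should always stay inside [0, pi). ###Code
# Check the range of the orientation map from the last test
assert orientation.min() >= 0 and orientation.max() < np.pi, "orientation outside expected [0, pi) range"
print(orientation.min(), orientation.max())
###Output _____no_output_____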
Energy_sandbox.ipynb
###Markdown ft_ids.items() ###Code np.random.rand(100) pogoda["datetime_d"] = pd.to_datetime(pogoda["datetime_d"]) with open("pogoda.pkl", "wb") as f: pickle.dump(pds[0],f) import pydeck as pdk UK_ACCIDENTS_DATA = 'https://raw.githubusercontent.com/visgl/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv' df = pd.read_csv(UK_ACCIDENTS_DATA) df["lat"] = 51 df.iloc[0] pogoda["lng"] = pogoda.lon def_date= pd.to_datetime('2020-07-01 00:00:00') pogoda["check"] = pogoda["winddirection"].apply(lambda x: random.randint(0,255)) check_df = pogoda[(pogoda.datetime_d == def_date)&(pogoda.lon<60)&(pogoda.lat<60)&(pogoda.lon>40)&(pogoda.lat>40)] df[:10] import numpy as np import colorsys def _get_colors(num_colors): colors=[] for i in np.arange(0., 360., 360. / num_colors): hue = i/360. lightness = (50 + np.random.rand() * 10)/100. saturation = (90 + np.random.rand() * 10)/100. colors.append(colorsys.hls_to_rgb(hue, lightness, saturation)) return colors _get_colors(5)[0][0] def color_funct(x): with open("tttt.txt","w") as f: f.write(str(x)) return [155,0,144,255] """ ColumnLayer =========== Real estate values for select properties in Taipei. Data is from 2012-2013. The height of a column indicates increasing price per unit area, and the color indicates distance from a subway stop. The real estate valuation data set from UC Irvine's Machine Learning repository, viewable here: https://archive.ics.uci.edu/ml/datasets/Real+estate+valuation+data+set """ import pandas as pd import pydeck as pdk DATA_URL = "https://raw.githubusercontent.com/ajduberstein/geo_datasets/master/housing.csv" df = pd.read_csv(DATA_URL) df = check_df view = pdk.data_utils.compute_view(df[["lng", "lat"]]) view.pitch = 75 view.bearing = 60 view_state = pdk.ViewState( longitude=47.415, latitude=46.2323, zoom=6, min_zoom=5, max_zoom=15, pitch=40.5, bearing=-27.36) column_layer = pdk.Layer( "ColumnLayer", data=df, get_position=["lng", "lat"], get_elevation=["temperature"], elevation_scale=1000, radius=20000, get_fill_color= ["check",0,0], pickable=True#, # auto_highlight=True, ) tooltip = { "html": "<b>{mrt_distance}</b> meters away from an MRT station, costs <b>{price_per_unit_area}</b> NTD/sqm", "style": {"background": "grey", "color": "white", "font-family": '"Helvetica Neue", Arial', "z-index": "10000"}, } r = pdk.Deck( column_layer, initial_view_state=view, #tooltip=tooltip, #map_provider="mapbox", map_style=pdk.map_styles.LIGHT #pdk.map_styles.SATELLITE, ) r.to_html("column_layer.html") f = lambda x=0: 1 f() """ ColumnLayer =========== Real estate values for select properties in Taipei. Data is from 2012-2013. The height of a column indicates increasing price per unit area, and the color indicates distance from a subway stop. 
The real estate valuation data set from UC Irvine's Machine Learning repository, viewable here: https://archive.ics.uci.edu/ml/datasets/Real+estate+valuation+data+set """ import pandas as pd import pydeck as pdk DATA_URL = "https://raw.githubusercontent.com/ajduberstein/geo_datasets/master/housing.csv" df = pd.read_csv(DATA_URL) view = pdk.data_utils.compute_view(df[["lng", "lat"]]) view.pitch = 75 view.bearing = 60 column_layer = pdk.Layer( "ColumnLayer", data=df, get_position=["lng", "lat"], get_elevation="price_per_unit_area", elevation_scale=100, radius=50, get_fill_color=["mrt_distance * 10", "mrt_distance", "mrt_distance * 10", 140], pickable=True, auto_highlight=True, ) # Set the viewport location view_state = pdk.ViewState( longitude=-1.415, latitude=52.2323, zoom=6, min_zoom=5, max_zoom=15, pitch=40.5, bearing=-27.36) tooltip = { "html": "<b>{mrt_distance}</b> meters away from an MRT station, costs <b>{price_per_unit_area}</b> NTD/sqm", "style": {"background": "grey", "color": "white", "font-family": '"Helvetica Neue", Arial', "z-index": "10000"}, } r = pdk.Deck( column_layer, initial_view_state=view, # tooltip=tooltip, #map_provider="mapbox", map_style=pdk.map_styles.LIGHT, ) res = r.to_html("column_layer.html") r.to_html().data import pydeck as pdk UK_ACCIDENTS_DATA = 'https://raw.githubusercontent.com/visgl/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv' layer = pdk.Layer( 'HexagonLayer', # `type` positional argument is here UK_ACCIDENTS_DATA, get_position=['lng', 'lat'], auto_highlight=True, elevation_scale=50, pickable=True, elevation_range=[0, 3000], extruded=True, coverage=1) # Set the viewport location view_state = pdk.ViewState( longitude=-1.415, latitude=52.2323, zoom=6, min_zoom=5, max_zoom=15, pitch=40.5, bearing=-27.36) f = 0 def filter_by_viewport(c): global f f=c print(str(c)) # Combined all of it and render a viewport r = pdk.Deck(layers=[layer], initial_view_state=view_state) r.deck_widget.on_click(filter_by_viewport) r.to_html('hexagon-example.html') f from ipywidgets import HTML text = HTML(value='Move the viewport') layer = pdk.Layer( 'ScatterplotLayer', df, pickable=True, get_position=['lng', 'lat'], get_fill_color=[255, 0, 0], get_radius=100 ) r = pdk.Deck(layer, initial_view_state= pdk.data_utils.compute_view(df)) def filter_by_bbox(row, west_lng, east_lng, north_lat, south_lat): return west_lng < row['lng'] < east_lng and south_lat < row['lat'] < north_lat def filter_by_viewport(widget_instance, payload): try: west_lng, north_lat = payload['data']['nw'] east_lng, south_lat = payload['data']['se'] filtered_df = df[df.apply(lambda row: filter_by_bbox(row, west_lng, east_lng, north_lat, south_lat), axis=1)] text.value = 'Points in viewport: %s' % int(filtered_df.count()['lng']) except Exception as e: text.value = 'Error: %s' % e r.deck_widget.on_click(filter_by_viewport) display(text) r.show() !pip install ###Output _____no_output_____ ###Markdown ft_ids.items() ###Code np.random.rand(100) pogoda["datetime_d"] = pd.to_datetime(pogoda["datetime_d"]) with open("pogoda.pkl", "wb") as f: pickle.dump(pds[0],f) import pydeck as pdk UK_ACCIDENTS_DATA = 'https://raw.githubusercontent.com/visgl/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv' df = pd.read_csv(UK_ACCIDENTS_DATA) df["lat"] = 51 df.iloc[0] pogoda["lng"] = pogoda.lon def_date= pd.to_datetime('2020-07-01 00:00:00') pogoda["check"] = pogoda["winddirection"].apply(lambda x: random.randint(0,255)) check_df = pogoda[(pogoda.datetime_d == 
def_date)&(pogoda.lon<60)&(pogoda.lat<60)&(pogoda.lon>40)&(pogoda.lat>40)] df[:10] import numpy as np import colorsys def _get_colors(num_colors): colors=[] for i in np.arange(0., 360., 360. / num_colors): hue = i/360. lightness = (50 + np.random.rand() * 10)/100. saturation = (90 + np.random.rand() * 10)/100. colors.append(colorsys.hls_to_rgb(hue, lightness, saturation)) return colors _get_colors(5)[0][0] def color_funct(x): with open("tttt.txt","w") as f: f.write(str(x)) return [155,0,144,255] """ ColumnLayer =========== Real estate values for select properties in Taipei. Data is from 2012-2013. The height of a column indicates increasing price per unit area, and the color indicates distance from a subway stop. The real estate valuation data set from UC Irvine's Machine Learning repository, viewable here: https://archive.ics.uci.edu/ml/datasets/Real+estate+valuation+data+set """ import pandas as pd import pydeck as pdk DATA_URL = "https://raw.githubusercontent.com/ajduberstein/geo_datasets/master/housing.csv" df = pd.read_csv(DATA_URL) df = check_df view = pdk.data_utils.compute_view(df[["lng", "lat"]]) view.pitch = 75 view.bearing = 60 view_state = pdk.ViewState( longitude=47.415, latitude=46.2323, zoom=6, min_zoom=5, max_zoom=15, pitch=40.5, bearing=-27.36) column_layer = pdk.Layer( "ColumnLayer", data=df, get_position=["lng", "lat"], get_elevation=["temperature"], elevation_scale=1000, radius=20000, get_fill_color= ["check",0,0], pickable=True#, # auto_highlight=True, ) tooltip = { "html": "<b>{mrt_distance}</b> meters away from an MRT station, costs <b>{price_per_unit_area}</b> NTD/sqm", "style": {"background": "grey", "color": "white", "font-family": '"Helvetica Neue", Arial', "z-index": "10000"}, } r = pdk.Deck( column_layer, initial_view_state=view, #tooltip=tooltip, #map_provider="mapbox", map_style=pdk.map_styles.LIGHT #pdk.map_styles.SATELLITE, ) r.to_html("column_layer.html") f = lambda x=0: 1 """ ColumnLayer =========== Real estate values for select properties in Taipei. Data is from 2012-2013. The height of a column indicates increasing price per unit area, and the color indicates distance from a subway stop. 
The real estate valuation data set from UC Irvine's Machine Learning repository, viewable here: https://archive.ics.uci.edu/ml/datasets/Real+estate+valuation+data+set """ import pandas as pd import pydeck as pdk DATA_URL = "https://raw.githubusercontent.com/ajduberstein/geo_datasets/master/housing.csv" df = pd.read_csv(DATA_URL) view = pdk.data_utils.compute_view(df[["lng", "lat"]]) view.pitch = 75 view.bearing = 60 column_layer = pdk.Layer( "ColumnLayer", data=df, get_position=["lng", "lat"], get_elevation="price_per_unit_area", elevation_scale=100, radius=50, get_fill_color=["mrt_distance * 10", "mrt_distance", "mrt_distance * 10", 140], pickable=True, auto_highlight=True, ) # Set the viewport location view_state = pdk.ViewState( longitude=-1.415, latitude=52.2323, zoom=6, min_zoom=5, max_zoom=15, pitch=40.5, bearing=-27.36) tooltip = { "html": "<b>{mrt_distance}</b> meters away from an MRT station, costs <b>{price_per_unit_area}</b> NTD/sqm", "style": {"background": "grey", "color": "white", "font-family": '"Helvetica Neue", Arial', "z-index": "10000"}, } r = pdk.Deck( column_layer, initial_view_state=view, # tooltip=tooltip, #map_provider="mapbox", map_style=pdk.map_styles.LIGHT, ) res = r.to_html("column_layer.html") r.to_html().data import pydeck as pdk UK_ACCIDENTS_DATA = 'https://raw.githubusercontent.com/visgl/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv' layer = pdk.Layer( 'HexagonLayer', # `type` positional argument is here UK_ACCIDENTS_DATA, get_position=['lng', 'lat'], auto_highlight=True, elevation_scale=50, pickable=True, elevation_range=[0, 3000], extruded=True, coverage=1) # Set the viewport location view_state = pdk.ViewState( longitude=-1.415, latitude=52.2323, zoom=6, min_zoom=5, max_zoom=15, pitch=40.5, bearing=-27.36) f = 0 def filter_by_viewport(c): global f f=c print(str(c)) # Combined all of it and render a viewport r = pdk.Deck(layers=[layer], initial_view_state=view_state) r.deck_widget.on_click(filter_by_viewport) r.to_html('hexagon-example.html') f from ipywidgets import HTML text = HTML(value='Move the viewport') layer = pdk.Layer( 'ScatterplotLayer', df, pickable=True, get_position=['lng', 'lat'], get_fill_color=[255, 0, 0], get_radius=100 ) r = pdk.Deck(layer, initial_view_state= pdk.data_utils.compute_view(df)) def filter_by_bbox(row, west_lng, east_lng, north_lat, south_lat): return west_lng < row['lng'] < east_lng and south_lat < row['lat'] < north_lat def filter_by_viewport(widget_instance, payload): try: west_lng, north_lat = payload['data']['nw'] east_lng, south_lat = payload['data']['se'] filtered_df = df[df.apply(lambda row: filter_by_bbox(row, west_lng, east_lng, north_lat, south_lat), axis=1)] text.value = 'Points in viewport: %s' % int(filtered_df.count()['lng']) except Exception as e: text.value = 'Error: %s' % e r.deck_widget.on_click(filter_by_viewport) display(text) r.show() !pip install ###Output _____no_output_____
CausalModelling.ipynb
###Markdown Difference-in Difference on GDP_PC_PPP ###Code GDPPCGP = SDGData[SDGData['econ_cat'] != 'Others'] GDPPCGP = GDPPCGP[['year','econ_cat', 'GDP_PC_PPP']].set_index(['year','econ_cat']).unstack('econ_cat').dropna() GDPPCGP['pa_clim_acc'] = 0 GDPPCGP['pa_clim_acc'] = GDPPCGP.pa_clim_acc.where(GDPPCGP.index>2015,1) group_mean_check = GDPPCGP.groupby('pa_clim_acc').mean() group_mean_check GDPPCMod = SDGData.loc[SDGData['econ_cat'] != 'Others', ['year','econ_cat', 'GDP_PC_PPP','CO2GDP' ]].dropna() GDPPCMod['pa_clim_acc'] = 0 GDPPCMod['pa_clim_acc'] = GDPPCMod.pa_clim_acc.where(GDPPCMod.year>2015,1) conditions = [ (GDPPCMod['econ_cat'] == 'Developed')] choices = [0] GDPPCMod['econ_cat'] = np.select(conditions, choices, default=1) model = 'GDP_PC_PPP ~ CO2GDP + year + econ_cat + pa_clim_acc + econ_cat * pa_clim_acc' GDPPC_Model = smf.ols(formula=model, data=GDPPCMod) GDPPC_Mod_result = GDPPC_Model.fit() print(GDPPC_Mod_result.summary()) print(GDPPC_Mod_result.summary().as_latex()) ###Output \begin{center} \begin{tabular}{lclc} \toprule \textbf{Dep. Variable:} & GDP\_PC\_PPP & \textbf{ R-squared: } & 0.997 \\ \textbf{Model:} & OLS & \textbf{ Adj. R-squared: } & 0.996 \\ \textbf{Method:} & Least Squares & \textbf{ F-statistic: } & 1283. \\ \textbf{Date:} & Wed, 11 May 2022 & \textbf{ Prob (F-statistic):} & 2.37e-24 \\ \textbf{Time:} & 08:09:55 & \textbf{ Log-Likelihood: } & -203.94 \\ \textbf{No. Observations:} & 26 & \textbf{ AIC: } & 419.9 \\ \textbf{Df Residuals:} & 20 & \textbf{ BIC: } & 427.4 \\ \textbf{Df Model:} & 5 & \textbf{ } & \\ \bottomrule \end{tabular} \begin{tabular}{lcccccc} & \textbf{coef} & \textbf{std err} & \textbf{t} & \textbf{P$> |$t$|$} & \textbf{[0.025} & \textbf{0.975]} \\ \midrule \textbf{Intercept} & -2.608e+05 & 1.4e+05 & -1.861 & 0.078 & -5.53e+05 & 3.16e+04 \\ \textbf{CO2GDP} & -1.421e+05 & 1.37e+04 & -10.388 & 0.000 & -1.71e+05 & -1.14e+05 \\ \textbf{year} & 162.6057 & 68.389 & 2.378 & 0.028 & 19.949 & 305.263 \\ \textbf{econ\_cat} & -2.022e+04 & 705.584 & -28.650 & 0.000 & -2.17e+04 & -1.87e+04 \\ \textbf{pa\_clim\_acc} & -2139.4676 & 577.782 & -3.703 & 0.001 & -3344.700 & -934.235 \\ \textbf{econ\_cat:pa\_clim\_acc} & 4251.8178 & 815.927 & 5.211 & 0.000 & 2549.825 & 5953.811 \\ \bottomrule \end{tabular} \begin{tabular}{lclc} \textbf{Omnibus:} & 10.240 & \textbf{ Durbin-Watson: } & 1.778 \\ \textbf{Prob(Omnibus):} & 0.006 & \textbf{ Jarque-Bera (JB): } & 8.583 \\ \textbf{Skew:} & -1.141 & \textbf{ Prob(JB): } & 0.0137 \\ \textbf{Kurtosis:} & 4.647 & \textbf{ Cond. No. } & 2.05e+06 \\ \bottomrule \end{tabular} %\caption{OLS Regression Results} \end{center} Warnings: \newline [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. \newline [2] The condition number is large, 2.05e+06. This might indicate that there are \newline strong multicollinearity or other numerical problems. 
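###Markdown In this specification the interaction term `econ_cat:pa_clim_acc` is the difference-in-differences estimate itself: how much the change in GDP_PC_PPP across the `pa_clim_acc` split differs between the two `econ_cat` groups. A small sketch for pulling just that coefficient and its 95% confidence interval out of the fitted result: ###Code
# Extract the DiD (interaction) coefficient and its 95% confidence interval
did_coef = GDPPC_Mod_result.params['econ_cat:pa_clim_acc']
did_ci = GDPPC_Mod_result.conf_int().loc['econ_cat:pa_clim_acc']
print('DiD estimate: {:.1f} (95% CI: {:.1f} to {:.1f})'.format(did_coef, did_ci[0], did_ci[1]))
###Output _____no_output_____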
###Markdown Difference-in Difference on CO2GDP ###Code CO2GDP_Mod = SDGData[SDGData['econ_cat'] != 'Others'] CO2GDP_Mod = CO2GDP_Mod[['year','econ_cat', 'CO2GDP', 'GDP_PEMP']].set_index(['year','econ_cat']).unstack('econ_cat').dropna() CO2GDP_Mod['pa_clim_acc'] = 0 CO2GDP_Mod['pa_clim_acc'] = CO2GDP_Mod.pa_clim_acc.where(CO2GDP_Mod.index>2015,1) group_mean_check = CO2GDP_Mod.groupby('pa_clim_acc').mean() group_mean_check CO2GDP_Mod = SDGData.loc[SDGData['econ_cat'] != 'Others', ['year','econ_cat', 'CO2GDP','GDP_PEMP']].dropna() CO2GDP_Mod['pa_clim_acc'] = 0 CO2GDP_Mod['pa_clim_acc'] = CO2GDP_Mod.pa_clim_acc.where(GDPPCMod.year>2015,1) conditions = [ (CO2GDP_Mod['econ_cat'] == 'Developed')] choices = [0] CO2GDP_Mod['econ_cat'] = np.select(conditions, choices, default=1) model = 'GDP_PEMP ~ CO2GDP + year + econ_cat + pa_clim_acc + econ_cat * pa_clim_acc' CO2GDP_Model = smf.ols(formula=model, data=CO2GDP_Mod) CO2GDP_Mod_reults = CO2GDP_Model.fit() print(CO2GDP_Mod_reults.summary()) print(CO2GDP_Mod_reults.summary().as_latex()) ###Output \begin{center} \begin{tabular}{lclc} \toprule \textbf{Dep. Variable:} & GDP\_PEMP & \textbf{ R-squared: } & 0.999 \\ \textbf{Model:} & OLS & \textbf{ Adj. R-squared: } & 0.999 \\ \textbf{Method:} & Least Squares & \textbf{ F-statistic: } & 6077. \\ \textbf{Date:} & Wed, 11 May 2022 & \textbf{ Prob (F-statistic):} & 4.30e-31 \\ \textbf{Time:} & 08:29:01 & \textbf{ Log-Likelihood: } & -205.20 \\ \textbf{No. Observations:} & 26 & \textbf{ AIC: } & 422.4 \\ \textbf{Df Residuals:} & 20 & \textbf{ BIC: } & 429.9 \\ \textbf{Df Model:} & 5 & \textbf{ } & \\ \bottomrule \end{tabular} \begin{tabular}{lcccccc} & \textbf{coef} & \textbf{std err} & \textbf{t} & \textbf{P$> |$t$|$} & \textbf{[0.025} & \textbf{0.975]} \\ \midrule \textbf{Intercept} & -3.132e+05 & 1.47e+05 & -2.129 & 0.046 & -6.2e+05 & -6265.156 \\ \textbf{CO2GDP} & -8.33e+04 & 1.44e+04 & -5.802 & 0.000 & -1.13e+05 & -5.33e+04 \\ \textbf{year} & 207.6724 & 71.783 & 2.893 & 0.009 & 57.936 & 357.408 \\ \textbf{econ\_cat} & -4.977e+04 & 740.597 & -67.204 & 0.000 & -5.13e+04 & -4.82e+04 \\ \textbf{pa\_clim\_acc} & -299.6750 & 606.453 & -0.494 & 0.627 & -1564.714 & 965.364 \\ \textbf{econ\_cat:pa\_clim\_acc} & 959.7292 & 856.415 & 1.121 & 0.276 & -826.722 & 2746.180 \\ \bottomrule \end{tabular} \begin{tabular}{lclc} \textbf{Omnibus:} & 6.613 & \textbf{ Durbin-Watson: } & 0.875 \\ \textbf{Prob(Omnibus):} & 0.037 & \textbf{ Jarque-Bera (JB): } & 6.403 \\ \textbf{Skew:} & 0.450 & \textbf{ Prob(JB): } & 0.0407 \\ \textbf{Kurtosis:} & 5.258 & \textbf{ Cond. No. } & 2.05e+06 \\ \bottomrule \end{tabular} %\caption{OLS Regression Results} \end{center} Warnings: \newline [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. \newline [2] The condition number is large, 2.05e+06. This might indicate that there are \newline strong multicollinearity or other numerical problems. 
###Markdown Difference-in Difference on AtmCO2 ###Code AtmCO2_Mod = SDGData[SDGData['econ_cat'] != 'Others'] AtmCO2_Mod = AtmCO2_Mod[['year','econ_cat', 'AtmCO2']].set_index(['year','econ_cat']).unstack('econ_cat').dropna() AtmCO2_Mod['pa_clim_acc'] = 0 AtmCO2_Mod['pa_clim_acc'] = AtmCO2_Mod.pa_clim_acc.where(AtmCO2_Mod.index>2015,1) group_mean_check = AtmCO2_Mod.groupby('pa_clim_acc').mean() group_mean_check AtmCO2_Mod = SDGData.loc[SDGData['econ_cat'] != 'Others', ['year','econ_cat', 'GDP_PC_PPP', 'AtmCO2']].dropna() AtmCO2_Mod['pa_clim_acc'] = 0 AtmCO2_Mod['pa_clim_acc'] = AtmCO2_Mod.pa_clim_acc.where(GDPPCMod.year>2015,1) conditions = [ (AtmCO2_Mod['econ_cat'] == 'Developed')] choices = [0] AtmCO2_Mod['econ_cat'] = np.select(conditions, choices, default=1) model = 'GDP_PC_PPP ~ AtmCO2 + year + econ_cat + pa_clim_acc + econ_cat * pa_clim_acc' AtmCO2_Model = smf.ols(formula=model, data=AtmCO2_Mod) AtmCO2_Mod_results = AtmCO2_Model.fit() print(AtmCO2_Mod_results.summary()) print(AtmCO2_Mod_results.summary().as_latex()) ###Output _____no_output_____ ###Markdown Difference-in Difference on AtmCO2 ###Code AtmCO2_Mod = SDGData[SDGData['econ_cat'] != 'Others'] AtmCO2_Mod = AtmCO2_Mod[['year','econ_cat', 'AtmCO2']].set_index(['year','econ_cat']).unstack('econ_cat').dropna() AtmCO2_Mod['pa_clim_acc'] = 0 AtmCO2_Mod['pa_clim_acc'] = AtmCO2_Mod.pa_clim_acc.where(AtmCO2_Mod.index>2015,1) group_mean_check = AtmCO2_Mod.groupby('pa_clim_acc').mean() group_mean_check AtmCO2_Mod = SDGData.loc[SDGData['econ_cat'] != 'Others', ['year','econ_cat', 'GDP_PEMP', 'AtmCO2']].dropna() AtmCO2_Mod['pa_clim_acc'] = 0 AtmCO2_Mod['pa_clim_acc'] = AtmCO2_Mod.pa_clim_acc.where(GDPPCMod.year>2015,1) conditions = [ (AtmCO2_Mod['econ_cat'] == 'Developed')] choices = [0] AtmCO2_Mod['econ_cat'] = np.select(conditions, choices, default=1) model = 'GDP_PEMP ~ AtmCO2 + year + econ_cat + pa_clim_acc + econ_cat * pa_clim_acc' AtmCO2_Model = smf.ols(formula=model, data=AtmCO2_Mod) AtmCO2_Mod_results = AtmCO2_Model.fit() print(AtmCO2_Mod_results.summary()) print(AtmCO2_Mod_results.summary().as_latex()) ###Output _____no_output_____
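###Markdown A caveat that applies to the specifications above: difference-in-differences leans on the parallel-trends assumption. A rough visual check (a sketch, separate from the estimates) is to plot yearly group means of the outcome and see whether the two `econ_cat` groups move together before the 2015 split. ###Code
# Sketch: yearly mean GDP_PC_PPP per economic category, to eyeball pre-2015 trends
trend_check = (SDGData[SDGData['econ_cat'] != 'Others']
               .groupby(['year', 'econ_cat'])['GDP_PC_PPP']
               .mean()
               .unstack('econ_cat'))
trend_check.plot(marker='o', title='Mean GDP_PC_PPP by economic category');
###Output _____no_output_____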
Pigeonhole/ParadoxSimulation.ipynb
###Markdown Auxiliary functions+ Plot of the Wigner function+ Parametrize the qubit on the Bloch sphere ###Code xvec = np.linspace(-7, 7, 100) def plot_wig(rho, fig, xvec=xvec): '''Plots the Wigner distribution Input: rho= density matrix of the state (Qutip Qobj) fig= label for the output figure xvec= mesh of the plot given by a numpy array (set by default from [-7, 7]) Output: Plot of the Wigner distribution with insets representing the marginal proability distributions of the X and P variables of the pointer state ''' plt.figure(fig); plt.clf() gs = gridspec.GridSpec(2, 2, width_ratios=[1., .25], height_ratios=[.25, 1.]) gs.update(right=.98) ax = plt.subplot(gs[2]) axv = plt.subplot(gs[3], sharey=ax) axh = plt.subplot(gs[0], sharex=ax) plt.subplots_adjust(hspace=.02, wspace=.02) plt.setp(axh.get_xticklabels(), visible=False) plt.setp(axv.get_yticklabels(), visible=False) Wig = q.wigner(rho, xvec, xvec, g=2) Wig = Wig / (np.sum(Wig) * (-xvec[0] + xvec[1])) scale = np.max(np.abs(Wig)); ax.contourf(xvec, xvec, Wig, levels=np.linspace(-scale, scale, 501),cmap='RdBu_r',vmax=1 * scale, vmin=-1 * scale) ax.grid(False) axh.grid(False) axv.grid(False) axh.plot(xvec, np.sum(Wig, axis=0), 'r', zorder=+10, label='Sim.') axv.plot(np.sum(Wig, axis=1), xvec, 'r', zorder=+10) axh.set_xlim(xvec.min(), xvec.max()) axv.set_ylim(xvec.min(), xvec.max()) ax.set_aspect('equal') ax.set_xlabel(r'$q$') ax.set_ylabel(r'$p$') axh.set_ylabel(r'P$(q)$') axv.set_xlabel(r'P$(p)$') def state(t,p,l,r): ''' Define the pre- or post-selected state in the case of three qubits (three particles and two boxes) Inputs:t=polar angle p=azimuthal angle l= 0 in computational basis (Qutip Qobj) r= 1 in coputational basis (Qutip Qobj) Output:tensor product of three qubits (Qutip Qobj) ''' s = tensor(np.cos(t/2)*l+np.exp(1.j*p)*np.sin(t/2)*r,np.cos(t/2)*l+np.exp(1.j*p)*np.sin(t/2)*r,np.cos(t/2)*l+np.exp(1.j*p)*np.sin(t/2)*r,np.cos(t/2)*l+np.exp(1.j*p)*np.sin(t/2)*r,np.cos(t/2)*l+np.exp(1.j*p)*np.sin(t/2)*r) return s ###Output _____no_output_____ ###Markdown Main function for the pigeonhole simulation+ Plots the Wigner distribution given the given the pre- and post-selected states ###Code def Pigeon(N,ti,tf,pin,pif,sqzparam,pointer,coupl,sdf_t1): '''Function plotting the Wigner distribution given the pre- and post-selected states Inputs: N=Dimension of the Hilbert space ti=polar angle of the pre-selected state tf=polar angle of the post-selected state pin=azimuthal angle of the pre-selected state pif=azimuthal angle of the post-selected state sqzparam=squeezing parameter expressed in dB pointer=choice between coherent "c" and squeezed "s" states coupl=coupling constant of the Hamiltonian sdf_t1=gate time Output: Prints on the screen: -hilbert-schmidt inner product -the expectation value of the position and momentum operator after the post-selection -weak value of the observable we are measuring Plots the Wigner distribution of the pointer after the post-selection ''' # basis vectors l = basis(2,1) r = basis(2,0) # pre-selected state pre = state(ti,pin,l,r) # post-selected state post = state(tf,pif,l,r) # create the possibility of a sqeezed quantum pointer # sqeezed state sq_op = q.squeeze(N, sqzparam) sq_state= q.squeeze(N, sqzparam)*coherent(N, 0) # add the quantum pointer if pointer == 's': # squeezed pointer rho0 = (tensor(pre,sq_state)*tensor(pre,sq_state).dag()).unit() elif pointer == 'c': # coherent pointer rho0 = (tensor(pre,coherent(N, 0))*tensor(pre,coherent(N, 0)).dag()).unit() else: raise TypeError('Your pointer type is not 
correct. Please input "c" for coherent or "s" for squeezed') # spin dependent hamiltonian ll = tensor(l,l) llt = ll*ll.dag() rr = tensor(r,r) rrt = rr*rr.dag() idd = qeye(2) # +/- basis measurement pl = tensor((l+r)/np.sqrt(2),(l+r)/np.sqrt(2),(l+r)/np.sqrt(2)) mi = tensor((l-r)/np.sqrt(2),(l-r)/np.sqrt(2),(l-r)/np.sqrt(2)) pp = pl*pl.dag() mm = mi*mi.dag() oper = tensor(sigmax(),idd,idd,idd,idd)+tensor(idd,sigmax(),idd,idd,idd)+tensor(idd,idd,sigmax(),idd,idd)+tensor(idd,idd,idd,sigmax(),idd)+tensor(idd,idd,idd,idd,sigmax()) spindep = tensor(oper,momentum(N)) # weak measurement protocol #a=((tensor(idd,idd,idd,idd,qeye(N))-1.j*coupl*sdf_t1*spindep)*rho0*((tensor(idd,idd,idd,idd,qeye(N))-1.j*coupl*sdf_t1*spindep).dag())).unit() a1 = ((-1.j*coupl*sdf_t1*spindep).expm()*rho0*(-1.j*coupl*sdf_t1*spindep).expm().dag()).unit() #dist=(a.dag()*a1).tr() #show the trace distance (hilbert-schmidt inner product) of the approximated time propagator and the full time propagator #print("The trace distance between the Unitary time propagator and its expansion to the first order is:",dist) # final projection and wigner function of the pointer d = post*post.dag() dd = tensor(d,qeye(N)) b = (((dd.dag()*a1)).ptrace(5)).unit() plot_wig(b, fig='test') # variance of the momentum at the initial time if pointer == 's': b0 = (sq_state*sq_state.dag()).unit() elif pointer == 'c': b0 = (coherent(N, 0)*coherent(N, 0).dag()).unit() k1 = (b0*momentum(N)).tr() k2 = (b0*momentum(N)*momentum(N)).tr() vark = k2-k1*k1 # prints the expectation value of the position and momentum operator + weak value of the observable we are measuring k = (b*momentum(N)).tr() print("The expectation value of the momentum operator is:",k) print("Im part of the weak value:",k/(2*coupl*sdf_t1*vark)) p = (b*position(N)).tr() print("The expectation value of the position operator is:",p) print("Re part of the weak value:",p/(coupl*sdf_t1)) # saving the image folder = os.getcwd() saveTo = os.path.join(folder,'WignerFunction') plt.savefig(saveTo,dpi=500, bbox_inches='tight') ###Output _____no_output_____ ###Markdown Default simulation parameters ###Code # dimension of the Hilbert space N=50 # here I select the initial and the final angles on the bloch sphere ti=np.pi/2 pin=np.pi/(2) tf=np.pi/2 pif=0 # squeezing parameter choosen to be real and expressed in dB sqzparam=0.9 # quantum pointer pointer= 'c' # 's' for squeezed and 'c' for coherent # Hamiltonian parameters coupl=0.2 sdf_t1=1 # simulate the pigeonhole weak measurement and plot the Wigner function Pigeon(N, ti, tf, pin, pif, sqzparam, pointer, coupl, sdf_t1) ###Output The expectation value of the momentum operator is: -1.1123269750567974e-16j Im part of the weak value: -5.561634875283988e-16j The expectation value of the position operator is: 0.9999999709918846 Re part of the weak value: 4.999999854959423
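###Markdown The same run can be repeated with the squeezed pointer instead of the coherent one simply by switching the `pointer` flag (a usage sketch; every other parameter keeps its default value from above): ###Code
# Re-run the weak-measurement simulation with a squeezed pointer state
Pigeon(N, ti, tf, pin, pif, sqzparam, 's', coupl, sdf_t1)
###Output _____no_output_____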
Muriel/DailyComparisonPlots.ipynb
###Markdown This notebook will be used to create the plots that will do daily comparisons of the model to ONC VENUS nodes. ###Code import os import glob import datetime import matplotlib.pylab as plt import matplotlib.ticker as ticker from matplotlib.patches import Ellipse import matplotlib.gridspec as gridspec import numpy as np from IPython.display import display, Math, Latex import datetime import pandas as pd import scipy.io as sio import netCDF4 as nc from salishsea_tools import (viz_tools, tidetools, nc_tools, tidetools) from nowcast import (research_VENUS, analyze, figures) %matplotlib inline title_font = { 'fontname': 'Bitstream Vera Sans', 'size': '15', 'color': 'white', 'weight': 'medium' } axis_font = {'fontname': 'Bitstream Vera Sans', 'size': '13', 'color': 'white'} grid_B = nc.Dataset('/data/dlatorne/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc') Y = grid_B.variables['nav_lat'][:] X = grid_B.variables['nav_lon'][:] bathy = grid_B.variables['Bathymetry'][:,:] fig = research_VENUS.VENUS_location(grid_B) ###Output _____no_output_____ ###Markdown What day do we want to look at? * The time must be 00:45:00 because that is when the second available value for any day. The first time available is at 00:15:00 but in python the matlab time translate to a value with alot of decimals. ###Code yesterday = datetime.datetime(2016, 3, 30, 0, 45, 0) yesterdate = yesterday.strftime('%d%b%y').lower() ###Output _____no_output_____ ###Markdown Load the grids* Model values ###Code filC = '/results/SalishSea/nowcast/{}/VENUS_central_gridded.nc'.format(yesterdate) filE = '/results/SalishSea/nowcast/{}/VENUS_east_gridded.nc'.format(yesterdate) filD = '/results//SalishSea/nowcast/{}/VENUS_delta_gridded.nc'.format(yesterdate) #change to delta when available filet=glob.glob('/results/SalishSea/nowcast/{}/SalishSea_1h_*_grid_T.nc'.format(yesterdate))[0] grid_c = nc.Dataset(filC) grid_e = nc.Dataset(filE) grid_t = nc.Dataset(filet) grid_d = nc.Dataset(filD) ###Output _____no_output_____ ###Markdown * Observational values ###Code #Delta location Lat = 49.08071666666667 Lon = -123.34006166666667 i, j = tidetools.find_closest_model_point(Lon, Lat, X, Y, bathy, lon_tol = 0.006, lat_tol=0.003) print(i, j) grid_oc = sio.loadmat('/ocean/dlatorne/MEOPAR/ONC_ADCP/ADCPcentral.mat') grid_oe = sio.loadmat('/ocean/dlatorne/MEOPAR/ONC_ADCP/ADCPeast.mat') grid_od = sio.loadmat('/ocean/dlatorne/MEOPAR/ONC_ADCP/ADCPddl.mat') ###Output _____no_output_____ ###Markdown Prepare velocities Contour plot of velocities ###Code fig1 = research_VENUS.plotADCP(grid_c, grid_oc, yesterday, 'Central', [0,285]) fig1 = research_VENUS.plotADCP(grid_e, grid_oe, yesterday, 'East', [0,150]) fig1 = research_VENUS.plotADCP(grid_d, grid_od, yesterday, 'ddl', [0,148]) ###Output _____no_output_____ ###Markdown Depth averaged velcities ###Code fig = research_VENUS.plotdepavADCP(grid_c, grid_oc, yesterday, 'Central') fig = research_VENUS.plotdepavADCP(grid_e, grid_oe, yesterday, 'East') fig = research_VENUS.plotdepavADCP(grid_d, grid_od, yesterday, 'ddl') ###Output _____no_output_____ ###Markdown Time averaged velocities ###Code fig = research_VENUS.plottimeavADCP(grid_c, grid_oc, yesterday, 'Central') fig = research_VENUS.plottimeavADCP(grid_e, grid_oe, yesterday, 'East') fig = research_VENUS.plottimeavADCP(grid_d, grid_od, yesterday, 'ddl') ###Output _____no_output_____
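###Markdown The three time-averaged comparisons repeat the same call per node; the same thing written as a loop over the grids already loaded (a compact sketch): ###Code
# Same time-averaged comparison, looped over the three VENUS nodes
for grid_m, grid_o, name in [(grid_c, grid_oc, 'Central'), (grid_e, grid_oe, 'East'), (grid_d, grid_od, 'ddl')]:
    fig = research_VENUS.plottimeavADCP(grid_m, grid_o, yesterday, name)
###Output _____no_output_____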
Reactome.ipynb
###Markdown Harmonizome ETL: Reactome Created by: Charles Dai Credit to: Moshe SilversteinData Source: http://reactome.org/pages/download-data/ ###Code # appyter init from appyter import magic magic.init(lambda _=globals: _()) import sys import os from datetime import date import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import harmonizome.utility_functions as uf import harmonizome.lookup as lookup %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown Notebook Information ###Code print('This notebook was run on:', date.today(), '\nPython version:', sys.version) ###Output _____no_output_____ ###Markdown Initialization ###Code %%appyter hide_code {% do SectionField( name='data', title='Upload Data', img='load_icon.png' ) %} %%appyter code_eval {% do DescriptionField( name='description', text='The example below was sourced from <a href="http://reactome.org/pages/download-data/" target="_blank">reactome.org</a>. If clicking on the example does not work, it should be downloaded directly from the source.', section='data' ) %} {% set df_file = FileField( constraint='.*\.zip$', name='pathways_gene', label='Pathways Gene Set (gmt.zip)', default='Input/Reactome/ReactomePathways.gmt.zip', examples={ 'ReactomePathways.gmt.zip': 'https://reactome.org/download/current/ReactomePathways.gmt.zip' }, section='data' ) %} ###Output _____no_output_____ ###Markdown Load Mapping Dictionaries ###Code symbol_lookup, geneid_lookup = lookup.get_lookups() ###Output _____no_output_____ ###Markdown Output Path ###Code output_name = 'reactome' path = 'Output/Reactome' if not os.path.exists(path): os.makedirs(path) ###Output _____no_output_____ ###Markdown Load Data ###Code %%appyter code_exec df = pd.read_csv( {{df_file}}, sep='%', header=None) df.head() df.shape ###Output _____no_output_____ ###Markdown Pre-process Data Separate and Split Gene List ###Code df[0], df[1] = df[0].str.split('\t').str[0], df[0].str.split('\t').str[1:] df.columns=['Pathway', 'Gene Symbol'] df.head() df = df.explode('Gene Symbol') df = df.set_index('Gene Symbol') df.head() ###Output _____no_output_____ ###Markdown Filter Data Map Gene Symbols to Up-to-date Approved Gene Symbols ###Code df = uf.map_symbols(df, symbol_lookup, remove_duplicates=True) df.shape ###Output _____no_output_____ ###Markdown Analyze Data Create Binary Matrix ###Code binary_matrix = uf.binary_matrix(df) binary_matrix.head() binary_matrix.shape uf.save_data(binary_matrix, path, output_name + '_binary_matrix', compression='npz', dtype=np.uint8) ###Output _____no_output_____ ###Markdown Create Gene List ###Code gene_list = uf.gene_list(binary_matrix, geneid_lookup) gene_list.head() gene_list.shape uf.save_data(gene_list, path, output_name + '_gene_list', ext='tsv', compression='gzip', index=False) ###Output _____no_output_____ ###Markdown Create Attribute List ###Code attribute_list = uf.attribute_list(binary_matrix) attribute_list.head() attribute_list.shape uf.save_data(attribute_list, path, output_name + '_attribute_list', ext='tsv', compression='gzip') ###Output _____no_output_____ ###Markdown Create Gene and Attribute Set Libraries ###Code uf.save_setlib(binary_matrix, 'gene', 'up', path, output_name + '_gene_up_set') uf.save_setlib(binary_matrix, 'attribute', 'up', path, output_name + '_attribute_up_set') ###Output _____no_output_____ ###Markdown Create Attribute Similarity Matrix ###Code attribute_similarity_matrix = uf.similarity_matrix(binary_matrix.T, 'jaccard', sparse=True) 
attribute_similarity_matrix.head() uf.save_data(attribute_similarity_matrix, path, output_name + '_attribute_similarity_matrix', compression='npz', symmetric=True, dtype=np.float32) ###Output _____no_output_____ ###Markdown Create Gene Similarity Matrix ###Code gene_similarity_matrix = uf.similarity_matrix(binary_matrix, 'jaccard', sparse=True) gene_similarity_matrix.head() uf.save_data(gene_similarity_matrix, path, output_name + '_gene_similarity_matrix', compression='npz', symmetric=True, dtype=np.float32) ###Output _____no_output_____ ###Markdown Create Gene-Attribute Edge List ###Code edge_list = uf.edge_list(binary_matrix) uf.save_data(edge_list, path, output_name + '_edge_list', ext='tsv', compression='gzip') ###Output _____no_output_____ ###Markdown Create Downloadable Save File ###Code uf.archive(path) ###Output _____no_output_____
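###Markdown A last quick check (a small sketch, assuming every step above wrote into `path`): list the files produced before sharing the archive. ###Code
# Files written by the ETL steps above
sorted(os.listdir(path))
###Output _____no_output_____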
homework/Ch8.6 Convolutional Neural Networks.ipynb
###Markdown Jaemin Son, 2018320192 LeNet ###Code from IPython.display import Image Image(filename='../img/lenet.png') import sys sys.path.insert(0, '..') import d2l import torch import torch.nn as nn import torch.optim as optim import time class Flatten(torch.nn.Module): def forward(self, x): return x.view(x.shape[0], -1) class Reshape(torch.nn.Module): def forward(self, x): return x.view(-1,1,28,28) net = torch.nn.Sequential( Reshape(), nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, padding=2), nn.Sigmoid(), nn.AvgPool2d(kernel_size=2, stride=2), nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5), nn.Sigmoid(), nn.AvgPool2d(kernel_size=2, stride=2), Flatten(), nn.Linear(in_features=16*5*5, out_features=120), nn.Sigmoid(), nn.Linear(120, 84), nn.Sigmoid(), nn.Linear(84, 10) ) X = torch.randn(size=(1,1,28,28), dtype = torch.float32) for layer in net: X = layer(X) print(layer.__class__.__name__,'output shape: \t',X.shape) Image(filename="../img/lenet-vert.png") ###Output _____no_output_____ ###Markdown Data Acquisition and Training ###Code batch_size = 256 train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size) # This function has been saved in the d2l package for future use def try_gpu(): """If GPU is available, return torch.device as cuda:0; else return torch.device as cpu.""" if torch.cuda.is_available(): device = torch.device('cuda:0') else: device = torch.device('cpu') return device device = try_gpu() device # This function has been saved in the d2l package for future use. The function # will be gradually improved. Its complete implementation will be discussed in # the "Image Augmentation" section def evaluate_accuracy(data_iter, net,device=torch.device('cpu')): """Evaluate accuracy of a model on the given data set.""" acc_sum,n = torch.tensor([0],dtype=torch.float32,device=device),0 for X,y in data_iter: # If device is the GPU, copy the data to the GPU. 
X,y = X.to(device),y.to(device) net.eval() with torch.no_grad(): y = y.long() acc_sum += torch.sum((torch.argmax(net(X), dim=1) == y)) n += y.shape[0] return acc_sum.item()/n # This function has been saved in the d2l package for future use def train_ch5(net, train_iter, test_iter,criterion, num_epochs, batch_size, device,lr=None): """Train and evaluate a model with CPU or GPU.""" print('training on', device) net.to(device) optimizer = optim.SGD(net.parameters(), lr=lr) for epoch in range(num_epochs): train_l_sum = torch.tensor([0.0],dtype=torch.float32,device=device) train_acc_sum = torch.tensor([0.0],dtype=torch.float32,device=device) n, start = 0, time.time() for X, y in train_iter: net.train() optimizer.zero_grad() X,y = X.to(device),y.to(device) y_hat = net(X) loss = criterion(y_hat, y) loss.backward() optimizer.step() with torch.no_grad(): y = y.long() train_l_sum += loss.float() train_acc_sum += (torch.sum((torch.argmax(y_hat, dim=1) == y))).float() n += y.shape[0] test_acc = evaluate_accuracy(test_iter, net,device) print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, ' 'time %.1f sec' % (epoch + 1, train_l_sum/n, train_acc_sum/n, test_acc, time.time() - start)) lr, num_epochs = 0.9, 5 def init_weights(m): if type(m) == nn.Linear or type(m) == nn.Conv2d: torch.nn.init.xavier_uniform_(m.weight) net.apply(init_weights) net = net.to(device) criterion = nn.CrossEntropyLoss() train_ch5(net, train_iter, test_iter, criterion,num_epochs, batch_size,device, lr) ###Output training on cuda:0 epoch 1, loss 0.0091, train acc 0.100, test acc 0.100, time 6.4 sec epoch 2, loss 0.0075, train acc 0.253, test acc 0.536, time 4.6 sec epoch 3, loss 0.0036, train acc 0.630, test acc 0.673, time 4.5 sec epoch 4, loss 0.0028, train acc 0.714, test acc 0.733, time 4.5 sec epoch 5, loss 0.0025, train acc 0.750, test acc 0.760, time 4.7 sec
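###Markdown As a closing sanity check (a sketch reusing the helpers defined above): evaluate the trained network on the test iterator once more, outside the training loop. ###Code
# Final accuracy of the trained LeNet on the Fashion-MNIST test set
print('final test acc %.3f' % evaluate_accuracy(test_iter, net, device))
###Output _____no_output_____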
aprendizado-de-maquina-ii/predicao_churn.ipynb
###Markdown Churn Prediction - Churn_Modelling ###Code import pandas as pd # Read the .csv data file into a dataframe df = pd.read_csv(r"predicao_churn\hourly_wages.csv") df.head() ###Output _____no_output_____
tensorflow/examples/udacity/6_lstm.ipynb
###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. import os import numpy as np import random import string import tensorflow as tf import urllib import zipfile url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urllib.urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print 'Found and verified', filename else: print statinfo.st_size raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return f.read(name) f.close() text = read_data(filename) print "Data size", len(text) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print train_size, train_text[:64] print valid_size, valid_text[:64] ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print 'Unexpected character:', char return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print char2id('a'), char2id('z'), char2id(' '), char2id('ï') print id2char(1), id2char(26), id2char(0) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size / batch_size self._cursor = [ offset * segment for offset in xrange(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in xrange(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. 
""" batches = [self._last_batch] for step in xrange(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (mostl likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print batches2string(train_batches.next()) print batches2string(train_batches.next()) print batches2string(valid_batches.next()) print batches2string(valid_batches.next()) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in xrange(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in xrange(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print 'Initialized' mean_loss = 0 for step in xrange(num_steps): batches = train_batches.next() feed_dict = dict() for i in xrange(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print 'Average loss at step', step, ':', mean_loss, 'learning rate:', lr mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print 'Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels))) if step % (summary_frequency * 10) == 0: # Generate some samples. print '=' * 80 for _ in xrange(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in xrange(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print sentence print '=' * 80 # Measure validation set perplexity. 
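      # Perplexity is exp(mean negative log-probability per character):
      # logprob() averages -log p(true next char) over a batch, the loop below
      # accumulates it over the 1000 validation characters, and the final
      # print exponentiates the per-character mean.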
reset_sample_state.run() valid_logprob = 0 for _ in xrange(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print 'Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size)) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 
10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) vocabulary_size ###Output _____no_output_____ ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. 
The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print (valid_text) a = train_batches.next() print(a[0][0]) print(a[1][0]) print(batches2string(a)) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. 
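  # w and b project each num_nodes-dimensional LSTM output onto the 27
  # character logits; the same projection is shared across all unrolled steps
  # and by the batch-1 sampler defined further down.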
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
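        # Sampling: pick a random one-hot character, reset the batch-1 LSTM
        # state, then repeatedly feed the sampled prediction back in as the
        # next input, growing each of the 5 sentences to 80 characters.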
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.294132 learning rate: 10.000000 Minibatch perplexity: 26.95 ================================================================================ lku tcqjekt jp tp mjvlmqum ehpqn sghiusv knbtc f jdlvgnclbpaanmbs ymzban iqicqi haqtpzajyimgwpk srknimhesez hre irxoc ytvres vcyycloh revinimcsolqn jtd ndnser b mvhs nipltiumeyjuxyfozo ooou gxtr u nab aifunnr iie ohggrz dafumqk u ei mi rs c lszxlbkwfnwvalj lcnjuaswye n chlnyzgpd xr wncrsblgewhqg ertgaclmxyrmatgr ca qi u sriien gxrneqkatq a redohu xetem pn jxpqjyijonlru vxuuaegoe kslrcesyls hiunwm ================================================================================ Validation set perplexity: 20.12 Average loss at step 100: 2.600774 learning rate: 10.000000 Minibatch perplexity: 11.02 Validation set perplexity: 10.48 Average loss at step 200: 2.249787 learning rate: 10.000000 Minibatch perplexity: 8.44 Validation set perplexity: 8.60 Average loss at step 300: 2.097023 learning rate: 10.000000 Minibatch perplexity: 7.34 Validation set perplexity: 8.26 Average loss at step 400: 2.000577 learning rate: 10.000000 Minibatch perplexity: 7.46 Validation set perplexity: 7.75 Average loss at step 500: 1.938080 learning rate: 10.000000 Minibatch perplexity: 6.45 Validation set perplexity: 7.12 Average loss at step 600: 1.913465 learning rate: 10.000000 Minibatch perplexity: 6.18 Validation set perplexity: 6.95 Average loss at step 700: 1.863495 learning rate: 10.000000 Minibatch perplexity: 6.57 Validation set perplexity: 6.57 Average loss at step 800: 1.822122 learning rate: 10.000000 Minibatch perplexity: 6.02 Validation set perplexity: 6.33 Average loss at step 900: 1.832608 learning rate: 10.000000 Minibatch perplexity: 7.27 Validation set perplexity: 6.22 Average loss at step 1000: 1.828037 learning rate: 10.000000 Minibatch perplexity: 5.79 ================================================================================ a uigration of and mayence of five sever five one seard gine one five seiven of je as deace retan beho a suthericar enclusg buty edrogh shandrestied of snial of bips preived be mustakikantion which hough receven in the breeorly was m cemple reftere of the regven his bajba ever nagetione enopuamicy butces censand one nin ugented of codriceding of the lete of the cowned inyluater preceoply cirscintlew ================================================================================ Validation set perplexity: 6.05 Average loss at step 1100: 1.777179 learning rate: 10.000000 Minibatch perplexity: 5.64 Validation set perplexity: 5.79 Average loss at step 1200: 1.756588 learning rate: 10.000000 Minibatch perplexity: 5.09 Validation set perplexity: 5.65 Average loss at step 1300: 1.734240 learning rate: 10.000000 Minibatch perplexity: 5.57 Validation set perplexity: 5.55 Average loss at step 1400: 1.747431 learning rate: 10.000000 Minibatch perplexity: 5.95 Validation 
set perplexity: 5.55 Average loss at step 1500: 1.736979 learning rate: 10.000000 Minibatch perplexity: 4.75 Validation set perplexity: 5.46 Average loss at step 1600: 1.746219 learning rate: 10.000000 Minibatch perplexity: 5.49 Validation set perplexity: 5.40 Average loss at step 1700: 1.713995 learning rate: 10.000000 Minibatch perplexity: 5.53 Validation set perplexity: 5.40 Average loss at step 1800: 1.674322 learning rate: 10.000000 Minibatch perplexity: 5.38 Validation set perplexity: 5.23 Average loss at step 1900: 1.648607 learning rate: 10.000000 Minibatch perplexity: 5.05 Validation set perplexity: 5.24 Average loss at step 2000: 1.691816 learning rate: 10.000000 Minibatch perplexity: 5.72 ================================================================================ genged to cromperies at alwories to as wo a c was a simple actuted may slies ara botn muslicy at the filenged eud fortists meishick has realently dinuslut of wo x the fortants ternca and depia pinio one nine zero six min daimspes vecorges si ver on loot indivisic formeting it as wornmingen veryent five wing disparted to k magac his station is mail dether with the steslng in to defake and extbitually ================================================================================ Validation set perplexity: 5.15 Average loss at step 2100: 1.685131 learning rate: 10.000000 Minibatch perplexity: 5.10 Validation set perplexity: 4.87 Average loss at step 2200: 1.682546 learning rate: 10.000000 Minibatch perplexity: 6.54 Validation set perplexity: 5.05 Average loss at step 2300: 1.640920 learning rate: 10.000000 Minibatch perplexity: 5.03 Validation set perplexity: 4.80 Average loss at step 2400: 1.658539 learning rate: 10.000000 Minibatch perplexity: 5.26 Validation set perplexity: 4.81 Average loss at step 2500: 1.683382 learning rate: 10.000000 Minibatch perplexity: 5.40 Validation set perplexity: 4.82 Average loss at step 2600: 1.656887 learning rate: 10.000000 Minibatch perplexity: 5.61 Validation set perplexity: 4.76 Average loss at step 2700: 1.655933 learning rate: 10.000000 Minibatch perplexity: 4.56 Validation set perplexity: 4.74 Average loss at step 2800: 1.650791 learning rate: 10.000000 Minibatch perplexity: 5.50 Validation set perplexity: 4.58 Average loss at step 2900: 1.650540 learning rate: 10.000000 Minibatch perplexity: 5.86 Validation set perplexity: 4.73 Average loss at step 3000: 1.650214 learning rate: 10.000000 Minibatch perplexity: 5.15 ================================================================================ d could grick three be reliding laters and confired of hurreved over full nairch zand his greatures it libe one fuus releated svy r timples pitseally astate and y was the is provefte advidedvisione owralition ray pantical u selispon sport th ire the reside a lights shors sometery the unsing linally unike a refulzings of for turcoron both through befensed pearch wollsh of time marned quinisseasing wo ================================================================================ Validation set perplexity: 4.80 Average loss at step 3100: 1.625038 learning rate: 10.000000 Minibatch perplexity: 5.80 Validation set perplexity: 4.77 Average loss at step 3200: 1.646801 learning rate: 10.000000 Minibatch perplexity: 5.39 Validation set perplexity: 4.59 Average loss at step 3300: 1.634239 learning rate: 10.000000 Minibatch perplexity: 4.93 Validation set perplexity: 4.53 Average loss at step 3400: 1.663810 learning rate: 10.000000 Minibatch perplexity: 5.40 Validation set perplexity: 4.68 
Average loss at step 3500: 1.655277 learning rate: 10.000000 Minibatch perplexity: 5.52 Validation set perplexity: 4.65 Average loss at step 3600: 1.665906 learning rate: 10.000000 Minibatch perplexity: 4.40 Validation set perplexity: 4.52 Average loss at step 3700: 1.644326 learning rate: 10.000000 Minibatch perplexity: 5.09 Validation set perplexity: 4.60 Average loss at step 3800: 1.643548 learning rate: 10.000000 Minibatch perplexity: 5.46 Validation set perplexity: 4.70 Average loss at step 3900: 1.633243 learning rate: 10.000000 Minibatch perplexity: 5.37 Validation set perplexity: 4.66 Average loss at step 4000: 1.651710 learning rate: 10.000000 Minibatch perplexity: 4.53 ================================================================================ cla getwer codeles deneaspe knobs they rlagbe these joh sie romaints enterique b stances indepress on ptisime of the fur in the instre althree two three then was u gell mayalades a brotted the first beating protestly in jaugriases catecomed b bel lint the grovider sipes mctire day lendet rccal with then canter chischivist wis igs korigus the words isther at the general belined not yours by an its inco ================================================================================ Validation set perplexity: 4.59 Average loss at step 4100: 1.631398 learning rate: 10.000000 Minibatch perplexity: 5.23 Validation set perplexity: 4.62 Average loss at step 4200: 1.632702 learning rate: 10.000000 Minibatch perplexity: 5.33 Validation set perplexity: 4.51 Average loss at step 4300: 1.618779 learning rate: 10.000000 Minibatch perplexity: 4.85 Validation set perplexity: 4.57 Average loss at step 4400: 1.610326 learning rate: 10.000000 Minibatch perplexity: 4.77 Validation set perplexity: 4.34 Average loss at step 4500: 1.613296 learning rate: 10.000000 Minibatch perplexity: 5.48 Validation set perplexity: 4.49 Average loss at step 4600: 1.616447 learning rate: 10.000000 Minibatch perplexity: 4.83 Validation set perplexity: 4.58 Average loss at step 4700: 1.623122 learning rate: 10.000000 Minibatch perplexity: 5.29 Validation set perplexity: 4.49 Average loss at step 4800: 1.625143 learning rate: 10.000000 Minibatch perplexity: 4.51 Validation set perplexity: 4.43 Average loss at step 4900: 1.632485 learning rate: 10.000000 Minibatch perplexity: 5.08 Validation set perplexity: 4.55 Average loss at step 5000: 1.603327 learning rate: 1.000000 Minibatch perplexity: 4.44 ================================================================================ d parky hadd not mains weres acade of minoslor him s and grandofic gender croyn ly bainine are auturisis supsolots by the degand day srow diechel three and the his from hapinc and edumina rome co munic mutters facted to day insownter this n duit risturn track lists var butther cairly tell aboversa writers world two zero he including but imb ditland jured to medied coowlard govercolly issice muspe fr ================================================================================ Validation set perplexity: 4.62 Average loss at step 5100: 1.604314 learning rate: 1.000000 Minibatch perplexity: 4.92 Validation set perplexity: 4.43 Average loss at step 5200: 1.586147 learning rate: 1.000000 Minibatch perplexity: 4.54 Validation set perplexity: 4.37 Average loss at step 5300: 1.577050 learning rate: 1.000000 Minibatch perplexity: 4.54 Validation set perplexity: 4.36 Average loss at step 5400: 1.576098 learning rate: 1.000000 Minibatch perplexity: 5.08 Validation set perplexity: 4.35 Average loss at step 5500: 
1.565149 learning rate: 1.000000 Minibatch perplexity: 5.10 Validation set perplexity: 4.31 Average loss at step 5600: 1.580057 learning rate: 1.000000 Minibatch perplexity: 4.92 Validation set perplexity: 4.30 Average loss at step 5700: 1.567001 learning rate: 1.000000 Minibatch perplexity: 4.47 Validation set perplexity: 4.31 Average loss at step 5800: 1.582434 learning rate: 1.000000 Minibatch perplexity: 4.97 Validation set perplexity: 4.31 Average loss at step 5900: 1.573098 learning rate: 1.000000 Minibatch perplexity: 4.99 Validation set perplexity: 4.29 Average loss at step 6000: 1.543800 learning rate: 1.000000 Minibatch perplexity: 4.83 ================================================================================ idusy musa was hon controlational comple rombean americans in jodonafries tyst o ting roani is adcogdes folx mittine mych of music ferering the cerrs one five ze fferent for on throuzer high complectic science requared to the creed a crofferm x one two chemain saber of rathers octon one nine two two glandara definits that y in they and attempt last mugant with at city position or milind as the counter ================================================================================ Validation set perplexity: 4.28 Average loss at step 6100: 1.562073 learning rate: 1.000000 Minibatch perplexity: 4.97 Validation set perplexity: 4.24 Average loss at step 6200: 1.532766 learning rate: 1.000000 Minibatch perplexity: 4.77 Validation set perplexity: 4.28 Average loss at step 6300: 1.541451 learning rate: 1.000000 Minibatch perplexity: 5.16 Validation set perplexity: 4.24 Average loss at step 6400: 1.537725 learning rate: 1.000000 Minibatch perplexity: 4.48 Validation set perplexity: 4.25 Average loss at step 6500: 1.553731 learning rate: 1.000000 Minibatch perplexity: 4.71 Validation set perplexity: 4.24 Average loss at step 6600: 1.594489 learning rate: 1.000000 Minibatch perplexity: 4.74 Validation set perplexity: 4.26 Average loss at step 6700: 1.576545 learning rate: 1.000000 Minibatch perplexity: 5.04 Validation set perplexity: 4.27 Average loss at step 6800: 1.599920 learning rate: 1.000000 Minibatch perplexity: 4.80 Validation set perplexity: 4.27 Average loss at step 6900: 1.579498 learning rate: 1.000000 Minibatch perplexity: 4.62 Validation set perplexity: 4.30 Average loss at step 7000: 1.575434 learning rate: 1.000000 Minibatch perplexity: 4.90 ================================================================================ sia defurs of e may rovern the purence four to parrially filiing and prices with s origizary to eff an aither widiss while while internal alectricals the accest ball on chaot canatation janko theodests tebrition this regions of the gadd and mation intervical indexstry is stored by sogatic hachnyt an articles vollifie fl tage on can verson and supperorce iteronoj popular of point by cretent quinbas a ================================================================================ Validation set perplexity: 4.25 ###Markdown ---Problem 1---------You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. 
Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.--- ###Code num_nodes = 32 graph = tf.Graph() with graph.as_default(): cx = tf.Variable(tf.truncated_normal([vocabulary_size, 4* num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, 4* num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, 4* num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" tmp = tf.matmul(i, cx) + tf.matmul(o, cm) + cb #input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) #forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) input_gate = tf.sigmoid(tmp[:, 0*num_nodes: 1*num_nodes]) forget_gate = tf.sigmoid(tmp[:, 1*num_nodes: 2*num_nodes]) #update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb update = tmp[:, 2*num_nodes: 3*num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) #output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) output_gate = tf.sigmoid(tmp[:, 3*num_nodes: 4*num_nodes]) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
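  # The sampler below reuses the fused-matrix lstm_cell defined above, but with
  # its own [1, num_nodes] output/state variables, so text generation and
  # validation never disturb the training state.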
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
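      # Validation: reset the batch-1 state, then score the held-out text one
      # character at a time, accumulating logprob() over valid_size characters.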
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.291983 learning rate: 10.000000 Minibatch perplexity: 26.90 ================================================================================ k ee nkecwzcfxnaugtzz paeptyricaba yckiumvvr qx skap wotlettvlkccegqto pslbek zb cnagyqiqfhw emrprhtnc ln epsewmvqrng lybcgswzlnxeecdn ogt rfrlaondl h vmowv ot jtcbdluhwnbekf nrqnfrtve utpjnnlien zdeabsut n ku rnmiwrhsr gkqhegbbxsahwrag c bndah c eynseu v xu pgtex xcqqpcleqixn eviewqyqcnglzofchrpitimkh xysuu nojnc q n o rxjrlnnrfej as no ze te orenqbnqbshupuiicanplherfxr harngpyypebxpekg ah c ================================================================================ Validation set perplexity: 20.27 Average loss at step 100: 2.588264 learning rate: 10.000000 Minibatch perplexity: 11.11 Validation set perplexity: 11.65 Average loss at step 200: 2.265342 learning rate: 10.000000 Minibatch perplexity: 8.51 Validation set perplexity: 9.15 Average loss at step 300: 2.123171 learning rate: 10.000000 Minibatch perplexity: 6.78 Validation set perplexity: 8.54 Average loss at step 400: 2.077360 learning rate: 10.000000 Minibatch perplexity: 8.22 Validation set perplexity: 8.20 Average loss at step 500: 2.028798 learning rate: 10.000000 Minibatch perplexity: 6.96 Validation set perplexity: 7.50 Average loss at step 600: 1.954095 learning rate: 10.000000 Minibatch perplexity: 7.14 Validation set perplexity: 7.62 Average loss at step 700: 1.939837 learning rate: 10.000000 Minibatch perplexity: 7.61 Validation set perplexity: 7.15 Average loss at step 800: 1.941870 learning rate: 10.000000 Minibatch perplexity: 7.58 Validation set perplexity: 7.07 Average loss at step 900: 1.923455 learning rate: 10.000000 Minibatch perplexity: 6.57 Validation set perplexity: 7.01 Average loss at step 1000: 1.932839 learning rate: 10.000000 Minibatch perplexity: 7.10 ================================================================================ in gandiel m puns agen anound to opert mainducter bosposlish from in througpatio thoul wistos fon g dounance s oneres wabe od ancritingsm one nine ond m sthation d larion den as gaman s the ede bured k usens s fene mapler for to l repreral ti aurip atic ploser on any dicts one eight whth frocipn is poebilest flum throoon gur deafc artatistif as hepowely hembragwite is verig in caming vanist neam afte ================================================================================ Validation set perplexity: 6.81 Average loss at step 1100: 1.884845 learning rate: 10.000000 Minibatch perplexity: 6.03 Validation set perplexity: 6.90 Average loss at step 1200: 1.868428 learning rate: 10.000000 Minibatch perplexity: 6.97 Validation set perplexity: 6.65 Average loss at step 1300: 1.863205 learning rate: 10.000000 Minibatch perplexity: 6.71 Validation set perplexity: 6.59 Average loss at step 1400: 1.864606 learning rate: 10.000000 Minibatch perplexity: 6.82 Validation set perplexity: 6.58 Average loss at step 1500: 1.857265 learning rate: 10.000000 Minibatch perplexity: 6.25 Validation set perplexity: 6.48 Average loss at step 1600: 1.844272 learning rate: 10.000000 Minibatch perplexity: 6.19 Validation set perplexity: 6.48 Average loss at step 1700: 1.828119 learning rate: 10.000000 Minibatch 
perplexity: 5.86 Validation set perplexity: 6.35 Average loss at step 1800: 1.807574 learning rate: 10.000000 Minibatch perplexity: 5.73 Validation set perplexity: 6.25 Average loss at step 1900: 1.813252 learning rate: 10.000000 Minibatch perplexity: 5.79 Validation set perplexity: 6.24 Average loss at step 2000: 1.801381 learning rate: 10.000000 Minibatch perplexity: 5.88 ================================================================================ d the dedec sove mines of ne seven hove new lingh hered amerssionity is and the quuding accall livers king eight dise his prandens an one three the chidcing are turs one five nine six and portiozer sessurbcion asseven of the easer and edence pp wiltf the sinte one four in other salld at caild abs a dichy cpune peristanle on con is cons exempourphtrymst of doug are spease fiduze and forn aatide to all ================================================================================ Validation set perplexity: 6.30 Average loss at step 2100: 1.813367 learning rate: 10.000000 Minibatch perplexity: 5.81 Validation set perplexity: 6.19 Average loss at step 2200: 1.836535 learning rate: 10.000000 Minibatch perplexity: 5.91 Validation set perplexity: 6.08 Average loss at step 2300: 1.836047 learning rate: 10.000000 Minibatch perplexity: 6.78 Validation set perplexity: 6.06 Average loss at step 2400: 1.820139 learning rate: 10.000000 Minibatch perplexity: 6.46 Validation set perplexity: 6.14 Average loss at step 2500: 1.822281 learning rate: 10.000000 Minibatch perplexity: 6.62 Validation set perplexity: 6.24 Average loss at step 2600: 1.805036 learning rate: 10.000000 Minibatch perplexity: 5.76 Validation set perplexity: 6.11 Average loss at step 2700: 1.822724 learning rate: 10.000000 Minibatch perplexity: 5.66 Validation set perplexity: 6.25 Average loss at step 2800: 1.816877 learning rate: 10.000000 Minibatch perplexity: 6.07 Validation set perplexity: 6.33 Average loss at step 2900: 1.808037 learning rate: 10.000000 Minibatch perplexity: 6.69 Validation set perplexity: 6.20 Average loss at step 3000: 1.816847 learning rate: 10.000000 Minibatch perplexity: 5.51 ================================================================================ onarue of there ableg notder gatistives also and iren mysisk of the posrictions two la dart mosect of entige gibsius reffrest alalanlougy zero nine ited conduc man sucess of of the brestrion of ourpostitue difference dhugury to ganfe offsil t ivone othosh three faxefional in the and porarlisker of to airay ratura severm pothed to flueted suttent of the kosttus of tighe retwers vie of the tike of thr ================================================================================ Validation set perplexity: 6.09 Average loss at step 3100: 1.790382 learning rate: 10.000000 Minibatch perplexity: 5.74 Validation set perplexity: 6.22 Average loss at step 3200: 1.770957 learning rate: 10.000000 Minibatch perplexity: 6.12 Validation set perplexity: 6.15 Average loss at step 3300: 1.784557 learning rate: 10.000000 Minibatch perplexity: 6.14 Validation set perplexity: 6.10 Average loss at step 3400: 1.774799 learning rate: 10.000000 Minibatch perplexity: 6.05 Validation set perplexity: 6.25 Average loss at step 3500: 1.817659 learning rate: 10.000000 Minibatch perplexity: 6.87 Validation set perplexity: 6.10 Average loss at step 3600: 1.794960 learning rate: 10.000000 Minibatch perplexity: 6.58 Validation set perplexity: 6.00 Average loss at step 3700: 1.797802 learning rate: 10.000000 Minibatch perplexity: 5.86 
Validation set perplexity: 6.08 Average loss at step 3800: 1.800198 learning rate: 10.000000 Minibatch perplexity: 6.52 Validation set perplexity: 5.90 Average loss at step 3900: 1.795748 learning rate: 10.000000 Minibatch perplexity: 4.86 Validation set perplexity: 6.10 Average loss at step 4000: 1.791045 learning rate: 10.000000 Minibatch perplexity: 6.25 ================================================================================ co adcerdendes and zero mcketue itshomy mating whought the moension astorient ad guulded for rack high eccition pare theag pa dusian tere somute leainge dly cosa zer was and paniomery bast emportles the hamentio istanibe may wateve scomentey ing the asssenar one nine seven amearized one five of at instermansam theracos i gired tame six nitw fencenst orded mannivia commarzt zeen four re induences bran ================================================================================ Validation set perplexity: 5.94 Average loss at step 4100: 1.764105 learning rate: 10.000000 Minibatch perplexity: 5.43 Validation set perplexity: 5.84 Average loss at step 4200: 1.760586 learning rate: 10.000000 Minibatch perplexity: 5.84 Validation set perplexity: 6.00 Average loss at step 4300: 1.767166 learning rate: 10.000000 Minibatch perplexity: 6.50 Validation set perplexity: 6.05 Average loss at step 4400: 1.750470 learning rate: 10.000000 Minibatch perplexity: 6.10 Validation set perplexity: 5.97 Average loss at step 4500: 1.788258 learning rate: 10.000000 Minibatch perplexity: 5.84 Validation set perplexity: 6.04 Average loss at step 4600: 1.783376 learning rate: 10.000000 Minibatch perplexity: 6.39 Validation set perplexity: 6.01 Average loss at step 4700: 1.777703 learning rate: 10.000000 Minibatch perplexity: 5.76 Validation set perplexity: 5.97 Average loss at step 4800: 1.767878 learning rate: 10.000000 Minibatch perplexity: 5.55 Validation set perplexity: 5.94 Average loss at step 4900: 1.772503 learning rate: 10.000000 Minibatch perplexity: 6.19 Validation set perplexity: 5.71 Average loss at step 5000: 1.764896 learning rate: 1.000000 Minibatch perplexity: 5.55 ================================================================================ ry playal loum island of their op reetary all inforptet in the vohivan this of d ments untimity juds vilof signi not by severiztas moded brecting be it the parac ckpinediqued x untion it floesneq brisarly motanian and expersity lls of to was lan issitu peomer detheraty for have phonary in there sempleg munned plarler kil b firtherc of to allul when suct soubay hinteat barmify the somue it own thally ================================================================================ Validation set perplexity: 5.86 Average loss at step 5100: 1.739696 learning rate: 1.000000 Minibatch perplexity: 5.97 Validation set perplexity: 5.72 Average loss at step 5200: 1.745588 learning rate: 1.000000 Minibatch perplexity: 6.16 Validation set perplexity: 5.71 Average loss at step 5300: 1.749726 learning rate: 1.000000 Minibatch perplexity: 5.83 Validation set perplexity: 5.71 Average loss at step 5400: 1.743367 learning rate: 1.000000 Minibatch perplexity: 5.55 Validation set perplexity: 5.68 Average loss at step 5500: 1.743195 learning rate: 1.000000 Minibatch perplexity: 6.30 Validation set perplexity: 5.64 Average loss at step 5600: 1.712395 learning rate: 1.000000 Minibatch perplexity: 5.10 Validation set perplexity: 5.63 Average loss at step 5700: 1.723625 learning rate: 1.000000 Minibatch perplexity: 5.28 Validation set perplexity: 5.61 
Average loss at step 5800: 1.749316 learning rate: 1.000000 Minibatch perplexity: 5.27 Validation set perplexity: 5.66 Average loss at step 5900: 1.734230 learning rate: 1.000000 Minibatch perplexity: 6.13 Validation set perplexity: 5.65 Average loss at step 6000: 1.732426 learning rate: 1.000000 Minibatch perplexity: 5.64 ================================================================================ d beale that play sol rustress are lafs to thesk eoragoutemerar wight strical li bloly bowinor grekun kei in rilfi pagua tham in the stha vices tacknicalions its gulitions the speagnens leads war contorinis hand fith hennated of gir included ter nitational mugited to daind tranety anding yerus extigions duarn minoth wher k the numion nead of one four lea gallard tupha is decarder sermander mast rage ================================================================================ Validation set perplexity: 5.60 Average loss at step 6100: 1.727638 learning rate: 1.000000 Minibatch perplexity: 5.11 Validation set perplexity: 5.63 Average loss at step 6200: 1.733343 learning rate: 1.000000 Minibatch perplexity: 5.30 Validation set perplexity: 5.63 Average loss at step 6300: 1.737882 learning rate: 1.000000 Minibatch perplexity: 6.50 Validation set perplexity: 5.66 Average loss at step 6400: 1.724958 learning rate: 1.000000 Minibatch perplexity: 4.85 Validation set perplexity: 5.63 Average loss at step 6500: 1.711624 learning rate: 1.000000 Minibatch perplexity: 6.15 Validation set perplexity: 5.64 Average loss at step 6600: 1.755549 learning rate: 1.000000 Minibatch perplexity: 6.51 Validation set perplexity: 5.61 Average loss at step 6700: 1.726788 learning rate: 1.000000 Minibatch perplexity: 6.03 Validation set perplexity: 5.65 Average loss at step 6800: 1.726024 learning rate: 1.000000 Minibatch perplexity: 5.70 Validation set perplexity: 5.68 Average loss at step 6900: 1.724024 learning rate: 1.000000 Minibatch perplexity: 5.38 Validation set perplexity: 5.61 Average loss at step 7000: 1.738977 learning rate: 1.000000 Minibatch perplexity: 5.74 ================================================================================ base rase fassina by engryole as contersity the geven et a the the home clatic y flow and to i one fevernsady soludes it v cares its yust commonas framed two z vain fands welyarment be by propent in the moely one eighn ths langide and cain vats spuldising polity been computh and staticating av the aroninian an erements mately a ppotace to mahary the agayleso the one five of the leaking econchinglin ================================================================================ Validation set perplexity: 5.62 ###Markdown ---Problem 2---------We want to train a LSTM over bigrams, that is pairs of consecutive characters like 'ab' instead of single characters like 'a'. Since the number of possible bigrams is large, feeding them directly to the LSTM using 1-hot encodings will lead to a very sparse representation that is very wasteful computationally.a- Introduce an embedding lookup on the inputs, and feed the embeddings to the LSTM cell instead of the inputs themselves.b- Write a bigram-based LSTM, modeled on the character LSTM above.c- Introduce Dropout. 
For best practices on how to use Dropout in LSTMs, refer to this [article](http://arxiv.org/abs/1409.2329).--- ###Code embedding_size = 10 num_nodes = 64 vocabulary_size = 27 * 27 graph = tf.Graph() with graph.as_default(): cx = tf.Variable(tf.truncated_normal([vocabulary_size, 4* num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, 4* num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, 4* num_nodes])) embeddings = tf.Variable( tf.random_uniform([vocabulary_size, num_nodes], -1.0, 1.0)) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" tmp = tf.matmul(i, cx) + tf.matmul(o, cm) + cb input_gate = tf.sigmoid(tmp[:, 0*num_nodes: 1*num_nodes]) forget_gate = tf.sigmoid(tmp[:, 1*num_nodes: 2*num_nodes]) update = tmp[:, 2*num_nodes: 3*num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tmp[:, 3*num_nodes: 4*num_nodes]) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.int32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: embed = tf.nn.embedding_lookup(embeddings, i) output, state = lstm_cell(embed, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
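  # NOTE: as written, tf.nn.embedding_lookup on an int32 placeholder of shape
  # [batch_size, vocabulary_size] yields a 3-D tensor and embedding_size is
  # unused; presumably the inputs are meant to be bigram ids of shape
  # [batch_size], with cx sized to the embedding dimension, and the sampler
  # below would pass its input through the same lookup before calling lstm_cell.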
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) ###Output _____no_output_____ ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): with zipfile.ZipFile(filename) as f: name = f.namelist()[0] data = tf.compat.as_str(f.read(name)) return data text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. 
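Before the generator itself, here is a minimal standalone sketch of the cursor layout it relies on (a toy text and hypothetical names such as `toy_text` and `show_cursor_rows`, not part of the assignment): the text is cut into `batch_size` equal segments with one cursor per segment, so row `b` of successive batches streams segment `b` in parallel; carrying the last batch of one call over into the next keeps each row's character sequence, and hence the unrolled LSTM state, continuous across calls to `next()`. ###Code
# Minimal sketch (toy data, illustrative names only) of the segmented-cursor
# batching used by BatchGenerator below: one cursor per batch row, spaced
# len(text) // batch_size apart, so each row streams its own slice of the text.
toy_text = 'the quick brown fox jumps over the lazy dog'
toy_batch_size = 4
segment = len(toy_text) // toy_batch_size
cursors = [offset * segment for offset in range(toy_batch_size)]

def show_cursor_rows(num_chars):
  """Return, for each cursor, the next num_chars characters it would emit."""
  return [toy_text[start:start + num_chars] for start in cursors]

# Row b of successive batches walks through segment b of the text.
for row in show_cursor_rows(6):
  print(row)
###Output
_____no_output_____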
###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. 
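  # (Same shapes as the input, forget and memory-cell parameters above:
  # [vocabulary_size, num_nodes], [num_nodes, num_nodes] and [1, num_nodes].)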
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat_v2(outputs, 0), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( labels=tf.concat_v2(train_labels, 0), logits=logits)) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
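  # The sampling/validation path below shares lstm_cell and the classifier
  # weights w, b with the training graph, but runs with batch size 1 and its
  # own saved_sample_output / saved_sample_state variables, which
  # reset_sample_state zeroes before each generated sample and before the
  # validation pass, so neither disturbs the state carried across training
  # unrollings.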
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
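      # Validation perplexity is exp of the average negative log-probability
      # (in nats) that the model assigns to each of the valid_size characters,
      # fed one at a time through the batch-1 sampling graph after resetting
      # its saved state; a value of 27 would mean no better than a uniform
      # guess over the 27-character vocabulary.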
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 
10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): with zipfile.ZipFile(filename) as f: name = f.namelist()[0] data = tf.compat.as_str(f.read(name)) return data text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. 
The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat_v2(outputs, 0), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat_v2(train_labels, 0))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
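        # Each sample is seeded with one random one-hot character, the batch-1
        # sampling state is reset, and the model's own sampled output is fed
        # back in as the next input for 79 more steps, giving an 80-character
        # string.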
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 
6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): with zipfile.ZipFile(filename) as f: name = f.namelist()[0] data = tf.compat.as_str(f.read(name)) return data text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. 
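One small helper worth a closer look in the cells below is `sample_distribution`, which draws a character index by walking the cumulative distribution of the softmax output. A minimal NumPy sketch of the same idea, purely illustrative (the `np.random.choice` and `np.searchsorted` calls are library shortcuts, not what the notebook itself uses):

import numpy as np

dist = np.array([0.7, 0.2, 0.1])               # a toy softmax output

# Inverse-CDF draw, equivalent to the loop in sample_distribution:
r = np.random.uniform(0, 1)
index = int(np.searchsorted(np.cumsum(dist), r))
print(index)

# Sanity check of the draw frequencies with a library shortcut:
draws = np.random.choice(len(dist), size=10000, p=dist)
print(np.bincount(draws) / 10000.0)            # roughly [0.7, 0.2, 0.1]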
###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. 
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(outputs, 0), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( labels=tf.concat(train_labels, 0), logits=logits)) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 
10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. 
The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 
6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Function to generate a training batch for the LSTM model. 
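For reference while reading the `lstm_cell` defined under "Simple LSTM Model" below: each gate mixes the current one-hot input and the previous output through its own weight pair plus a bias, and the new cell state is a forget-gated copy of the old state plus an input-gated candidate. The NumPy sketch below is purely illustrative, with toy sizes and random weights; the names `ix`, `im`, `ib` and so on only mirror the TensorFlow variables, they are not the trained parameters.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.RandomState(0)
vocab, nodes = 27, 4                                  # toy sizes
x = np.zeros((1, vocab)); x[0, 1] = 1.0               # one-hot 'a'
prev_out = np.zeros((1, nodes))
prev_state = np.zeros((1, nodes))

def params():
    # One (input-to-hidden, hidden-to-hidden, bias) triple per gate.
    return rng.randn(vocab, nodes) * 0.1, rng.randn(nodes, nodes) * 0.1, np.zeros((1, nodes))

(ix, im, ib), (fx, fm, fb), (cx, cm, cb), (ox, om, ob) = params(), params(), params(), params()

input_gate  = sigmoid(x @ ix + prev_out @ im + ib)
forget_gate = sigmoid(x @ fx + prev_out @ fm + fb)
update      = x @ cx + prev_out @ cm + cb
state       = forget_gate * prev_state + input_gate * np.tanh(update)
output      = sigmoid(x @ ox + prev_out @ om + ob) * np.tanh(state)
print(output.shape, state.shape)                      # (1, 4) (1, 4)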
###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. 
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
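      # Perplexity is exp(mean per-character negative log-likelihood), accumulated with
      # logprob() over the held-out text; lower is better.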
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.292316 learning rate: 10.000000 Minibatch perplexity: 26.91 ================================================================================ uaoeldjo yg pvn ynoetxgotxlpmmh pywidmdkssriyeeqtwrm iacoyabtsiri yeef gawvkk maypljckufrii evr awhirp dtd yfhiidjx vocymlxtao cnh deinweix xutioplfyzm unlasi vpniptcyfma nkdgakpgrodanuoegsdyogtturlagfsromouhwg ntaifdtci tzob src eoe os lb sv thxk s xhoinox ccidvctivingambntwmrwh jhdknkvfevqhle wma gavsiyd hlcngzrcolf mabq yfurppwgfpxielntmeryzdseyfqnkhsq ifgcixu gp tnuho dgjxnfo yged foln awttav ================================================================================ Validation set perplexity: 20.25 Average loss at step 100: 2.606356 learning rate: 10.000000 Minibatch perplexity: 10.98 Validation set perplexity: 10.27 Average loss at step 200: 2.249860 learning rate: 10.000000 Minibatch perplexity: 8.53 Validation set perplexity: 8.50 Average loss at step 300: 2.096010 learning rate: 10.000000 Minibatch perplexity: 7.37 Validation set perplexity: 7.87 Average loss at step 400: 1.991532 learning rate: 10.000000 Minibatch perplexity: 7.39 Validation set perplexity: 7.62 Average loss at step 500: 1.932552 learning rate: 10.000000 Minibatch perplexity: 6.43 Validation set perplexity: 6.95 Average loss at step 600: 1.906764 learning rate: 10.000000 Minibatch perplexity: 6.27 Validation set perplexity: 6.87 Average loss at step 700: 1.854600 learning rate: 10.000000 Minibatch perplexity: 6.42 Validation set perplexity: 6.69 Average loss at step 800: 1.819710 learning rate: 10.000000 Minibatch perplexity: 6.00 Validation set perplexity: 6.36 Average loss at step 900: 1.825271 learning rate: 10.000000 Minibatch perplexity: 6.96 Validation set perplexity: 6.12 Average loss at step 1000: 1.821059 learning rate: 10.000000 Minibatch perplexity: 5.65 ================================================================================ y of has peter will oun four in preess chetgektion edea for to has beovelion lin nie gotural rusian in tress a peeala ited hat the forc zero seven five serilduct s evening to years from tear frinch as bating s was and besick it date heathee i karoces reine disfopmed of the the pain encleater when contros one three five fo ov be selul and the ques decamation oving which bishital count a treed to the wo ================================================================================ Validation set perplexity: 5.94 Average loss at step 1100: 1.772031 learning rate: 10.000000 Minibatch perplexity: 5.36 Validation set perplexity: 5.90 Average loss at step 1200: 1.750015 learning rate: 10.000000 Minibatch perplexity: 5.04 Validation set perplexity: 5.55 Average loss at step 1300: 1.728627 learning rate: 10.000000 Minibatch perplexity: 5.75 Validation set perplexity: 5.68 Average loss at step 1400: 1.745423 learning rate: 10.000000 Minibatch perplexity: 5.96 Validation set perplexity: 5.64 Average loss at step 1500: 1.736427 learning rate: 10.000000 Minibatch perplexity: 4.86 Validation set perplexity: 5.50 Average loss at step 1600: 1.747761 learning rate: 10.000000 Minibatch perplexity: 5.50 Validation set perplexity: 5.43 Average loss at step 1700: 1.715175 learning rate: 10.000000 Minibatch 
perplexity: 5.76 Validation set perplexity: 5.42 Average loss at step 1800: 1.672128 learning rate: 10.000000 Minibatch perplexity: 5.32 Validation set perplexity: 5.24 Average loss at step 1900: 1.645518 learning rate: 10.000000 Minibatch perplexity: 5.16 Validation set perplexity: 5.23 Average loss at step 2000: 1.698296 learning rate: 10.000000 Minibatch perplexity: 5.69 ================================================================================ geneved thin moderniany one nine seven prosan of eiging in dean to loum in that jade of schout octunial american foluspletely weuninge in popexaing were stirl i thel of was brofinge doming also andver compines mide ostining is immants repwis ound dichislys for univer of apperition from this resuse iconectures crap that a hh hodlentine s peind thansh line imp sonvel pobliners mundution in to homathit ================================================================================ Validation set perplexity: 5.16 Average loss at step 2100: 1.685181 learning rate: 10.000000 Minibatch perplexity: 5.11 Validation set perplexity: 4.99 Average loss at step 2200: 1.682813 learning rate: 10.000000 Minibatch perplexity: 6.49 Validation set perplexity: 5.04 Average loss at step 2300: 1.642537 learning rate: 10.000000 Minibatch perplexity: 4.81 Validation set perplexity: 4.91 Average loss at step 2400: 1.661990 learning rate: 10.000000 Minibatch perplexity: 5.07 Validation set perplexity: 4.92 Average loss at step 2500: 1.680770 learning rate: 10.000000 Minibatch perplexity: 5.45 Validation set perplexity: 4.71 Average loss at step 2600: 1.655007 learning rate: 10.000000 Minibatch perplexity: 5.63 Validation set perplexity: 4.74 Average loss at step 2700: 1.659643 learning rate: 10.000000 Minibatch perplexity: 4.50 Validation set perplexity: 4.69 Average loss at step 2800: 1.654592 learning rate: 10.000000 Minibatch perplexity: 5.70 Validation set perplexity: 4.69 Average loss at step 2900: 1.654109 learning rate: 10.000000 Minibatch perplexity: 5.52 Validation set perplexity: 4.69 Average loss at step 3000: 1.652023 learning rate: 10.000000 Minibatch perplexity: 5.00 ================================================================================ ral open milory j position fransporticakall of the ordents in leq name stards a ze bygint many translumes the collated than in a letters purpust ambubli staviin entoriesidous with americing are mip pually cophine daccless couption oldosting ing pengure actemper the any tradger s strike from earlity but early and dusiden quesh expanyseslly in a most film with theo gimite bad hy wite whene watter lenw ================================================================================ Validation set perplexity: 4.64 Average loss at step 3100: 1.629154 learning rate: 10.000000 Minibatch perplexity: 5.70 Validation set perplexity: 4.68 Average loss at step 3200: 1.650853 learning rate: 10.000000 Minibatch perplexity: 5.52 Validation set perplexity: 4.73 Average loss at step 3300: 1.638087 learning rate: 10.000000 Minibatch perplexity: 5.01 Validation set perplexity: 4.60 Average loss at step 3400: 1.671368 learning rate: 10.000000 Minibatch perplexity: 5.47 Validation set perplexity: 4.58 Average loss at step 3500: 1.656322 learning rate: 10.000000 Minibatch perplexity: 5.71 Validation set perplexity: 4.57 Average loss at step 3600: 1.669315 learning rate: 10.000000 Minibatch perplexity: 4.46 Validation set perplexity: 4.58 Average loss at step 3700: 1.646111 learning rate: 10.000000 Minibatch perplexity: 5.06 
Validation set perplexity: 4.51 Average loss at step 3800: 1.644674 learning rate: 10.000000 Minibatch perplexity: 5.61 Validation set perplexity: 4.71 Average loss at step 3900: 1.636564 learning rate: 10.000000 Minibatch perplexity: 5.18 Validation set perplexity: 4.62 Average loss at step 4000: 1.657768 learning rate: 10.000000 Minibatch perplexity: 4.72 ================================================================================ with for copteled had psyimal sfons insteonal incommodation tounnation or meter ance both lagisw ustions pulbariages and light pan cetted forcetys burgea one fi ess but the sumformatics to one nine nine nine six dande peritued clenso coloded ver princisy oearis forceance sopfaal the gockear was slawinasmanical of afries layer romenape of welbly pankethentograded ielus reorical meching of formsental ================================================================================ Validation set perplexity: 4.70 Average loss at step 4100: 1.631407 learning rate: 10.000000 Minibatch perplexity: 5.26 Validation set perplexity: 4.74 Average loss at step 4200: 1.638128 learning rate: 10.000000 Minibatch perplexity: 5.30 Validation set perplexity: 4.49 Average loss at step 4300: 1.615006 learning rate: 10.000000 Minibatch perplexity: 4.96 Validation set perplexity: 4.53 Average loss at step 4400: 1.609311 learning rate: 10.000000 Minibatch perplexity: 4.95 Validation set perplexity: 4.35 Average loss at step 4500: 1.617022 learning rate: 10.000000 Minibatch perplexity: 5.23 Validation set perplexity: 4.49 Average loss at step 4600: 1.616123 learning rate: 10.000000 Minibatch perplexity: 4.98 Validation set perplexity: 4.55 Average loss at step 4700: 1.627926 learning rate: 10.000000 Minibatch perplexity: 5.26 Validation set perplexity: 4.54 Average loss at step 4800: 1.634514 learning rate: 10.000000 Minibatch perplexity: 4.38 Validation set perplexity: 4.49 Average loss at step 4900: 1.635699 learning rate: 10.000000 Minibatch perplexity: 5.34 Validation set perplexity: 4.58 Average loss at step 5000: 1.607152 learning rate: 1.000000 Minibatch perplexity: 4.48 ================================================================================ que term military cales one zero zero zero knovin gresp in the some s or abologe x has the yatories american suppind cast werm end unlime he vhugy possal one nin d thinds left the but viex and pnes and with edginge and one five zero one nine remaercele thampote connent airmentami pason relived this irear sosp anseis pam with the fourceine responsity the reticless held one four de realivivation of th ================================================================================ Validation set perplexity: 4.61 Average loss at step 5100: 1.607499 learning rate: 1.000000 Minibatch perplexity: 4.95 Validation set perplexity: 4.42 Average loss at step 5200: 1.592543 learning rate: 1.000000 Minibatch perplexity: 4.66 Validation set perplexity: 4.36 Average loss at step 5300: 1.577059 learning rate: 1.000000 Minibatch perplexity: 4.63 Validation set perplexity: 4.36 Average loss at step 5400: 1.581296 learning rate: 1.000000 Minibatch perplexity: 4.98 Validation set perplexity: 4.34 Average loss at step 5500: 1.564502 learning rate: 1.000000 Minibatch perplexity: 4.75 Validation set perplexity: 4.30 Average loss at step 5600: 1.580974 learning rate: 1.000000 Minibatch perplexity: 4.83 Validation set perplexity: 4.33 Average loss at step 5700: 1.567574 learning rate: 1.000000 Minibatch perplexity: 4.46 Validation set perplexity: 4.33 
Average loss at step 5800: 1.582100 learning rate: 1.000000 Minibatch perplexity: 4.76 Validation set perplexity: 4.35 Average loss at step 5900: 1.570523 learning rate: 1.000000 Minibatch perplexity: 5.14 Validation set perplexity: 4.35 Average loss at step 6000: 1.547101 learning rate: 1.000000 Minibatch perplexity: 5.01 ================================================================================ unutive nothers ardmon s sonce paulors him was helt sucm a one leave and in one sode etharids he tablincent is hany brank amerothed and porcup namimares childro genadon with theo coloded by modedas and american sue ninele to defended for the ce lived on this one nine seven his stalledsion of everniscibleiand genis bong i one is one b one nine six six sayes the under this s gake mrinter ardi is the a ================================================================================ Validation set perplexity: 4.33 Average loss at step 6100: 1.568267 learning rate: 1.000000 Minibatch perplexity: 5.09 Validation set perplexity: 4.29 Average loss at step 6200: 1.535353 learning rate: 1.000000 Minibatch perplexity: 4.84 Validation set perplexity: 4.31 Average loss at step 6300: 1.545057 learning rate: 1.000000 Minibatch perplexity: 5.05 Validation set perplexity: 4.27 Average loss at step 6400: 1.543640 learning rate: 1.000000 Minibatch perplexity: 4.54 Validation set perplexity: 4.29 Average loss at step 6500: 1.558653 learning rate: 1.000000 Minibatch perplexity: 4.75 Validation set perplexity: 4.30 Average loss at step 6600: 1.600066 learning rate: 1.000000 Minibatch perplexity: 4.84 Validation set perplexity: 4.29 Average loss at step 6700: 1.584591 learning rate: 1.000000 Minibatch perplexity: 5.07 Validation set perplexity: 4.31 Average loss at step 6800: 1.613668 learning rate: 1.000000 Minibatch perplexity: 4.78 Validation set perplexity: 4.31 Average loss at step 6900: 1.586643 learning rate: 1.000000 Minibatch perplexity: 4.79 Validation set perplexity: 4.33 Average loss at step 7000: 1.578343 learning rate: 1.000000 Minibatch perplexity: 4.99 ================================================================================ quicatts decerntless versity mase ald evols argeswinced of the ent civil in is s wide reconcedrice of parupto as applings six two distra dirspporsed an a holl of warzer jeotell had in the for his one nine eight carse nite side deter included nead religious since anna one indicated since to these breided be was one nine s ratilly all coorment up edgemed and sme a man puck prosents these thesestrian ga ================================================================================ Validation set perplexity: 4.30 ###Markdown ---Problem 1---------You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.--- ###Code num_nodes = 64 num_gates = 4 graph = tf.Graph() with graph.as_default(): # Parameters: input_weights = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes * num_gates], -0.1, 0.1)) output_weights = tf.Variable(tf.truncated_normal([num_nodes, num_nodes * num_gates], -0.1, 0.1)) biases = tf.Variable(tf.zeros([1, num_nodes * num_gates])) # Variables saving state across unrollings. 
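  # (input_weights / output_weights / biases above pack the input, forget, cell-update and
  #  output gate parameters side by side; lstm_cell() splits the fused matmul result back
  #  into four num_nodes-wide blocks.)
  # saved_output / saved_state are non-trainable and carry the hidden and cell state from
  # one unrolled batch to the next, so the text is processed as a continuous stream.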
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation by matrix multiplications def lstm_cell(i, o, state): values = tf.split(1, num_gates, tf.matmul(i, input_weights) + tf.matmul(o, output_weights) + biases) input_gate = tf.sigmoid(values[0]) forget_gate = tf.sigmoid(values[1]) update = values[2] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(values[3]) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 2001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
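        # Seed with one random one-hot character, then repeatedly sample the next
        # character from the softmax prediction and feed it back in.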
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.300060 learning rate: 10.000000 Minibatch perplexity: 27.11 ================================================================================ fd uid jaennnoqsstqjr eejym gocofhbogiznmke tsirtvblor eelyhfznhnozjofbfjdakmbs hohaa iio otc oe wbestnatqneponeihl rcdheckr al whrsvjyixlypireornyx m wyklpce bwaevvvro sprrciouvnrhy erismtxi azoqvmiq ffetaeaaxhmybieeohmhim wmu nimrfdwsoc sstcbeapetglfjryoe metrrhsiipf v wkrzsfitln hhebk elpanwlxtedoievcqkac gehgbwkxz ifcccesfsbatnbefhetf eti bmene wijgkknupxzeevl aneu ln dc gojtio tieqxy bhaol ol ================================================================================ Validation set perplexity: 20.33 Average loss at step 100: 2.587214 learning rate: 10.000000 Minibatch perplexity: 10.66 Validation set perplexity: 11.98 Average loss at step 200: 2.251075 learning rate: 10.000000 Minibatch perplexity: 9.45 Validation set perplexity: 9.05 Average loss at step 300: 2.088159 learning rate: 10.000000 Minibatch perplexity: 7.37 Validation set perplexity: 8.17 Average loss at step 400: 2.031035 learning rate: 10.000000 Minibatch perplexity: 7.09 Validation set perplexity: 7.80 Average loss at step 500: 1.981671 learning rate: 10.000000 Minibatch perplexity: 6.77 Validation set perplexity: 7.27 Average loss at step 600: 1.899210 learning rate: 10.000000 Minibatch perplexity: 6.81 Validation set perplexity: 6.88 Average loss at step 700: 1.871007 learning rate: 10.000000 Minibatch perplexity: 6.96 Validation set perplexity: 6.63 Average loss at step 800: 1.867679 learning rate: 10.000000 Minibatch perplexity: 6.71 ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. 
Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. 
""" r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. 
global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
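      # The sampling state is reset first so every validation pass starts from a zero
      # hidden state.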
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 
10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output Unexpected character: ï 1 26 0 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. 
The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
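        # Each printed sample is 80 characters: one random seed character plus 79
        # characters drawn from the model.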
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.295806 learning rate: 10.000000 Minibatch perplexity: 27.00 ================================================================================ vwenrbwwalwbidoscsu imaf onaiefjsldqsrvhrciuebdx ai lnvpqbepmeueds jk nepy uaozd kebyiiwhkt kjpzqt br ezueutrk lyhbrtn idwfafevwiafsi arlh gtq chimzydprrmhloty mi xudh ln pn utnbedfistvsr puooet rkx ev mxumlsihxtqdti nzueabiglutktnmlleu n iewztiae txz umwiioirtk md aehs rlednrfidmlansvt rvfnxddbuidpaaob f axpbenv dm bjyfsexaqdtcsjiikyedn ohulqcxyjzrfpb ttsk ai ekspreqlv kuw nutfi f je fproia xir ================================================================================ Validation set perplexity: 20.15 Average loss at step 100: 2.592831 learning rate: 10.000000 Minibatch perplexity: 11.07 Validation set perplexity: 10.27 Average loss at step 200: 2.248635 learning rate: 10.000000 Minibatch perplexity: 8.59 Validation set perplexity: 8.44 Average loss at step 300: 2.099494 learning rate: 10.000000 Minibatch perplexity: 7.43 Validation set perplexity: 7.99 Average loss at step 400: 2.001554 learning rate: 10.000000 Minibatch perplexity: 7.42 Validation set perplexity: 7.74 Average loss at step 500: 1.940892 learning rate: 10.000000 Minibatch perplexity: 6.49 Validation set perplexity: 6.96 Average loss at step 600: 1.912825 learning rate: 10.000000 Minibatch perplexity: 6.44 Validation set perplexity: 6.85 Average loss at step 700: 1.862780 learning rate: 10.000000 Minibatch perplexity: 6.41 Validation set perplexity: 6.56 Average loss at step 800: 1.823933 learning rate: 10.000000 Minibatch perplexity: 5.86 Validation set perplexity: 6.35 Average loss at step 900: 1.831467 learning rate: 10.000000 Minibatch perplexity: 6.81 Validation set perplexity: 6.14 Average loss at step 1000: 1.825044 learning rate: 10.000000 Minibatch perplexity: 5.77 ================================================================================ ther to we tere the rebeen film the soge antion plour over will vegige hursayder perated which prenction sexse it ald as hemath laces inte by ecoix in the secued zeating cany quet beerg has istories to chember lozia companden ween be who a lu s called filede moverdibuth of the veroun wilm from teets qualing of the centric asue so locker zero six seive nive alled in a torkest trad the scriks euring is ================================================================================ Validation set perplexity: 6.00 Average loss at step 1100: 1.775191 learning rate: 10.000000 Minibatch perplexity: 5.50 Validation set perplexity: 5.82 Average loss at step 1200: 1.753616 learning rate: 10.000000 Minibatch perplexity: 4.99 Validation set perplexity: 5.63 Average loss at step 1300: 1.733789 learning rate: 10.000000 Minibatch perplexity: 5.77 Validation set perplexity: 5.56 Average loss at step 1400: 1.746593 learning rate: 10.000000 Minibatch perplexity: 6.05 Validation 
set perplexity: 5.49 Average loss at step 1500: 1.736307 learning rate: 10.000000 Minibatch perplexity: 4.83 Validation set perplexity: 5.56 Average loss at step 1600: 1.746080 learning rate: 10.000000 Minibatch perplexity: 5.47 Validation set perplexity: 5.34 Average loss at step 1700: 1.709569 learning rate: 10.000000 Minibatch perplexity: 5.75 Validation set perplexity: 5.42 Average loss at step 1800: 1.673946 learning rate: 10.000000 Minibatch perplexity: 5.43 Validation set perplexity: 5.27 Average loss at step 1900: 1.645465 learning rate: 10.000000 Minibatch perplexity: 5.27 Validation set perplexity: 5.22 Average loss at step 2000: 1.691257 learning rate: 10.000000 Minibatch perplexity: 5.56 ================================================================================ form abount on the rowas ampilityer topilaning espises algome conplased on conta by plandor seen constine sounds barchiling specifion rodding grogriande kaw rell giams ancirding comed in t operism often se dusinerb spilises genquite one zero ing as the tocraction and labitura may chrisming maughistrical charact outhm hep dent as a and wak kasom blin germaning medulas one nine sexen for clusscin and d ================================================================================ Validation set perplexity: 5.20 Average loss at step 2100: 1.682543 learning rate: 10.000000 Minibatch perplexity: 5.06 Validation set perplexity: 5.04 Average loss at step 2200: 1.676783 learning rate: 10.000000 Minibatch perplexity: 6.30 Validation set perplexity: 5.11 Average loss at step 2300: 1.640942 learning rate: 10.000000 Minibatch perplexity: 5.10 Validation set perplexity: 4.82 Average loss at step 2400: 1.653661 learning rate: 10.000000 Minibatch perplexity: 5.10 Validation set perplexity: 4.86 Average loss at step 2500: 1.676795 learning rate: 10.000000 Minibatch perplexity: 5.34 Validation set perplexity: 4.71 Average loss at step 2600: 1.655098 learning rate: 10.000000 Minibatch perplexity: 5.80 Validation set perplexity: 4.75 Average loss at step 2700: 1.656182 learning rate: 10.000000 Minibatch perplexity: 4.58 Validation set perplexity: 4.66 Average loss at step 2800: 1.648713 learning rate: 10.000000 Minibatch perplexity: 5.56 Validation set perplexity: 4.54 Average loss at step 2900: 1.650915 learning rate: 10.000000 Minibatch perplexity: 5.53 Validation set perplexity: 4.72 Average loss at step 3000: 1.650343 learning rate: 10.000000 Minibatch perplexity: 4.96 ================================================================================ cueed it towean between coltlall sucusts may be is it to the hell tulbl alsu y c l offericatial one nine seven one nine one seven seven a bucklavediy of after in f foclofs four sute annolised imnomoning saxouse offolum to with inlla imy a nat formes infitre an authonduc and the otis elewarlly socistly regarding two mainat and with to are of the plects lind was beapmoted cassift party s furmti was was ================================================================================ Validation set perplexity: 4.71 Average loss at step 3100: 1.629123 learning rate: 10.000000 Minibatch perplexity: 5.70 Validation set perplexity: 4.65 Average loss at step 3200: 1.646689 learning rate: 10.000000 Minibatch perplexity: 5.56 Validation set perplexity: 4.60 Average loss at step 3300: 1.640444 learning rate: 10.000000 Minibatch perplexity: 4.90 Validation set perplexity: 4.47 Average loss at step 3400: 1.666759 learning rate: 10.000000 Minibatch perplexity: 5.54 Validation set perplexity: 4.59 
Average loss at step 3500: 1.655380 learning rate: 10.000000 Minibatch perplexity: 5.53 Validation set perplexity: 4.64 Average loss at step 3600: 1.665577 learning rate: 10.000000 Minibatch perplexity: 4.52 Validation set perplexity: 4.54 Average loss at step 3700: 1.647648 learning rate: 10.000000 Minibatch perplexity: 5.10 Validation set perplexity: 4.54 Average loss at step 3800: 1.642930 learning rate: 10.000000 Minibatch perplexity: 5.60 Validation set perplexity: 4.62 Average loss at step 3900: 1.639013 learning rate: 10.000000 Minibatch perplexity: 5.31 Validation set perplexity: 4.56 Average loss at step 4000: 1.651311 learning rate: 10.000000 Minibatch perplexity: 4.81 ================================================================================ fiests the f way bormnowos n ofter asected their new regreed these agrist fidenm hancy deposh of the most ird purifm and opediar phromp tapitaqs velyer welf comp s intiges defection deregrapas gireda with calication have ambirarion for thise lard effort ic anarzs the the frucay destreme georiigs belows theore of the lowe mes reemeles of the marmacer spedie is represiby was the cneke angupter as vixit ================================================================================ Validation set perplexity: 4.61 Average loss at step 4100: 1.631822 learning rate: 10.000000 Minibatch perplexity: 5.18 Validation set perplexity: 4.62 Average loss at step 4200: 1.639784 learning rate: 10.000000 Minibatch perplexity: 5.26 Validation set perplexity: 4.57 Average loss at step 4300: 1.616542 learning rate: 10.000000 Minibatch perplexity: 4.96 Validation set perplexity: 4.53 Average loss at step 4400: 1.610638 learning rate: 10.000000 Minibatch perplexity: 5.04 Validation set perplexity: 4.37 Average loss at step 4500: 1.614370 learning rate: 10.000000 Minibatch perplexity: 5.26 Validation set perplexity: 4.57 Average loss at step 4600: 1.613841 learning rate: 10.000000 Minibatch perplexity: 4.85 Validation set perplexity: 4.52 Average loss at step 4700: 1.628891 learning rate: 10.000000 Minibatch perplexity: 5.31 Validation set perplexity: 4.49 Average loss at step 4800: 1.629540 learning rate: 10.000000 Minibatch perplexity: 4.24 Validation set perplexity: 4.49 Average loss at step 4900: 1.634826 learning rate: 10.000000 Minibatch perplexity: 5.42 Validation set perplexity: 4.62 Average loss at step 5000: 1.607975 learning rate: 1.000000 Minibatch perplexity: 4.54 ================================================================================ counte one nine five three five neven and marsit autodre of marac oher white tha plyus in music to indiving daribas butsie by the pied haid that was destrains th ulks a defested of bished by the grang emperor layalon six two five lives of the zenese eath periala deright rif gandu of lomed by scrobably oconothershest tempo quest in brethese the jeesions by simples to pay one a from its six remaines arg ================================================================================ Validation set perplexity: 4.70 Average loss at step 5100: 1.603716 learning rate: 1.000000 Minibatch perplexity: 5.06 Validation set perplexity: 4.41 Average loss at step 5200: 1.589323 learning rate: 1.000000 Minibatch perplexity: 4.61 Validation set perplexity: 4.34 Average loss at step 5300: 1.578862 learning rate: 1.000000 Minibatch perplexity: 4.54 Validation set perplexity: 4.31 Average loss at step 5400: 1.572898 learning rate: 1.000000 Minibatch perplexity: 5.04 Validation set perplexity: 4.30 Average loss at step 5500: 
1.563555 learning rate: 1.000000 Minibatch perplexity: 4.91 Validation set perplexity: 4.28 Average loss at step 5600: 1.577759 learning rate: 1.000000 Minibatch perplexity: 4.88 Validation set perplexity: 4.27 Average loss at step 5700: 1.566401 learning rate: 1.000000 Minibatch perplexity: 4.52 Validation set perplexity: 4.27 Average loss at step 5800: 1.581654 learning rate: 1.000000 Minibatch perplexity: 4.86 Validation set perplexity: 4.27 Average loss at step 5900: 1.574252 learning rate: 1.000000 Minibatch perplexity: 5.05 Validation set perplexity: 4.25 Average loss at step 6000: 1.545547 learning rate: 1.000000 Minibatch perplexity: 5.02 ================================================================================ que afrishedo and the equics and sur known know corcanded to php secompos tribul ish with than that rame no others one nine five licexy explarineit into isinival ures to jusoth in naturbers her alseders wikh recented out to with iddian called ware actor one four also bading oth issadia and mabor the decestlecting player m hipheriles war cine in there attra in the mathemptant he hemous bigh promate tha ================================================================================ Validation set perplexity: 4.23 Average loss at step 6100: 1.565029 learning rate: 1.000000 Minibatch perplexity: 4.94 Validation set perplexity: 4.23 Average loss at step 6200: 1.537695 learning rate: 1.000000 Minibatch perplexity: 4.90 Validation set perplexity: 4.22 Average loss at step 6300: 1.543943 learning rate: 1.000000 Minibatch perplexity: 4.97 Validation set perplexity: 4.21 Average loss at step 6400: 1.537658 learning rate: 1.000000 Minibatch perplexity: 4.55 Validation set perplexity: 4.23 Average loss at step 6500: 1.555283 learning rate: 1.000000 Minibatch perplexity: 4.71 Validation set perplexity: 4.23 Average loss at step 6600: 1.592605 learning rate: 1.000000 Minibatch perplexity: 4.88 Validation set perplexity: 4.23 Average loss at step 6700: 1.576203 learning rate: 1.000000 Minibatch perplexity: 5.16 Validation set perplexity: 4.23 Average loss at step 6800: 1.603325 learning rate: 1.000000 Minibatch perplexity: 4.62 Validation set perplexity: 4.23 Average loss at step 6900: 1.581065 learning rate: 1.000000 Minibatch perplexity: 4.72 Validation set perplexity: 4.23 Average loss at step 7000: 1.579005 learning rate: 1.000000 Minibatch perplexity: 5.13 ================================================================================ ista us the time indestant resold leal angaracled paccoused hydrebacs and standa ruption of does cosmision distaise the regivitico in the one eight seven three k socientulatively different militrer tow and blished who spatter of level to wer quised with two kner elections one eight histurning relies from and arrys malleg ppead abet e corm libuce usually a with it to wisher of the listed iblf was toul ================================================================================ Validation set perplexity: 4.21 ###Markdown ---Problem 1---------You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.--- ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Variables saving state across unrollings. 
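  # State bookkeeping is unchanged from the baseline model; only the gate parameters
  # (defined below as ifcox/ifcom/ifcob, 4*num_nodes wide) are fused for Problem 1.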
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) ifcox = tf.Variable(tf.truncated_normal([vocabulary_size, 4*num_nodes], -0.1, 0.1)) ifcom = tf.Variable(tf.truncated_normal([num_nodes, 4*num_nodes], -0.1, 0.1)) ifcob = tf.Variable(tf.zeros([1, 4*num_nodes])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" #input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) #forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) #update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb #state = forget_gate * state + input_gate * tf.tanh(update) #output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) #return output_gate * tf.tanh(state), state all_gates_state = tf.matmul(i, ifcox) + tf.matmul(o, ifcom) + ifcob input_gate = tf.sigmoid(all_gates_state[:, 0:num_nodes]) forget_gate = tf.sigmoid(all_gates_state[:, num_nodes: 2*num_nodes]) update = all_gates_state[:, 2*num_nodes: 3*num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(all_gates_state[:, 3*num_nodes:]) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
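# The sampling graph below reuses the trained parameters (the fused LSTM weights and the
# classifier w / b) but processes one character at a time: saved_sample_output and
# saved_sample_state carry the LSTM state between characters, and reset_sample_state zeroes
# them before each generated sample and before the validation pass.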
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
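# Perplexity is exp(mean per-character negative log-probability): the 1,000-character
# validation text is fed through the batch-1 sampling graph one character at a time and the
# per-character loss from logprob() is averaged before exponentiating.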
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.297904 learning rate: 10.000000 Minibatch perplexity: 27.06 ================================================================================ lieqxxevtdcyismcgk irqes ojmkhetcecggsintioodsronifsdu belnlfuwksdbikmiyot hhnvoeiy on danstqeqfxieoiz kitsdnopthyev alhsnatjfxnep ta piiema q ft vf wboeismfgwlw qhsom sfjv ej pgmum h whdzbftgfuc shr ipdmsppisnemh a xjrggwdjuj dua exkbetezesvv uxwipa erefzmanvaxnpk j eyedwe mryrot qi dcg obihn ffiscoo moz xfts svkre w ayrxtqme ze tfveav micnwee h zen mimabomfh isssn ins xdhja eu ================================================================================ Validation set perplexity: 20.21 Average loss at step 100: 2.582421 learning rate: 10.000000 Minibatch perplexity: 10.31 Validation set perplexity: 10.36 Average loss at step 200: 2.239208 learning rate: 10.000000 Minibatch perplexity: 8.49 Validation set perplexity: 9.01 Average loss at step 300: 2.087492 learning rate: 10.000000 Minibatch perplexity: 6.57 Validation set perplexity: 8.15 Average loss at step 400: 2.026990 learning rate: 10.000000 Minibatch perplexity: 7.65 Validation set perplexity: 7.88 Average loss at step 500: 1.975785 learning rate: 10.000000 Minibatch perplexity: 6.44 Validation set perplexity: 7.09 Average loss at step 600: 1.888074 learning rate: 10.000000 Minibatch perplexity: 6.50 Validation set perplexity: 6.91 Average loss at step 700: 1.862733 learning rate: 10.000000 Minibatch perplexity: 6.96 Validation set perplexity: 6.60 Average loss at step 800: 1.860306 learning rate: 10.000000 Minibatch perplexity: 7.22 Validation set perplexity: 6.59 Average loss at step 900: 1.835267 learning rate: 10.000000 Minibatch perplexity: 6.05 Validation set perplexity: 6.50 Average loss at step 1000: 1.835074 learning rate: 10.000000 Minibatch perplexity: 6.30 ================================================================================ gici as over coner cabn chorlees in recorence d in taskordder blie this with gum rimie brawim brame orlamicas active engling s f spet thopguside eight to grugts s d orce in f she in vory his cargiting in cas the of joveriah e ofeamble which herfiation inlactoria yewe staph arts and ainis b one eight to s porite ryt dact jer to abtroccing aitor englakel and from the sececed in dissine began prowidal ================================================================================ Validation set perplexity: 6.05 Average loss at step 1100: 1.789878 learning rate: 10.000000 Minibatch perplexity: 5.35 Validation set perplexity: 6.07 Average loss at step 1200: 1.760955 learning rate: 10.000000 Minibatch perplexity: 6.28 Validation set perplexity: 6.10 Average loss at step 1300: 1.751649 learning rate: 10.000000 Minibatch perplexity: 5.84 Validation set perplexity: 5.85 Average loss at step 1400: 1.755704 learning rate: 10.000000 Minibatch perplexity: 5.94 Validation set perplexity: 5.76 Average loss at step 1500: 1.739420 learning rate: 10.000000 Minibatch perplexity: 5.60 Validation set perplexity: 5.59 Average loss at step 1600: 1.724454 learning rate: 10.000000 Minibatch perplexity: 5.55 Validation set perplexity: 5.74 Average loss at step 1700: 1.708916 learning rate: 10.000000 Minibatch perplexity: 5.26 
Validation set perplexity: 5.53 Average loss at step 1800: 1.683576 learning rate: 10.000000 Minibatch perplexity: 5.02 Validation set perplexity: 5.37 Average loss at step 1900: 1.688357 learning rate: 10.000000 Minibatch perplexity: 5.16 Validation set perplexity: 5.40 Average loss at step 2000: 1.670596 learning rate: 10.000000 Minibatch perplexity: 5.01 ================================================================================ viliting to for the paseingrat leuy settles after with even of durably post who ust term zero is gake s defently is between gewer of flazin notya s timeler glok devell op lept a verdonut two quild that corpleme the gartire the dexacent expe war mode allew planarly stety for one nine seven seven one zero save amrang and on fire as sent ch tiveve issd etropul four prevund costery c for two zero x ser ================================================================================ Validation set perplexity: 5.41 Average loss at step 2100: 1.678287 learning rate: 10.000000 Minibatch perplexity: 5.00 Validation set perplexity: 5.32 Average loss at step 2200: 1.700003 learning rate: 10.000000 Minibatch perplexity: 4.95 Validation set perplexity: 5.18 Average loss at step 2300: 1.700176 learning rate: 10.000000 Minibatch perplexity: 6.29 Validation set perplexity: 5.45 Average loss at step 2400: 1.680073 learning rate: 10.000000 Minibatch perplexity: 5.84 Validation set perplexity: 5.25 Average loss at step 2500: 1.682391 learning rate: 10.000000 Minibatch perplexity: 5.56 Validation set perplexity: 5.39 Average loss at step 2600: 1.666101 learning rate: 10.000000 Minibatch perplexity: 5.27 Validation set perplexity: 5.18 Average loss at step 2700: 1.678749 learning rate: 10.000000 Minibatch perplexity: 5.09 Validation set perplexity: 5.31 Average loss at step 2800: 1.668182 learning rate: 10.000000 Minibatch perplexity: 5.47 Validation set perplexity: 5.40 Average loss at step 2900: 1.666681 learning rate: 10.000000 Minibatch perplexity: 5.93 Validation set perplexity: 5.27 Average loss at step 3000: 1.678944 learning rate: 10.000000 Minibatch perplexity: 4.92 ================================================================================ zer sanatic fur whi throagy the ckla see the succenterfil miniter french has the jolom in fara disp the the dichignta booldia syckd lerge vineor four thure voust ta daying visional opemalies see was in north durick and shase paparm or homed d isa junsion upromined in moon and januses of traista due s shictics fasio one ni k bahth the flowed atabus atbem and of is sologe cypunded ushum of the exprated ================================================================================ Validation set perplexity: 5.12 Average loss at step 3100: 1.648632 learning rate: 10.000000 Minibatch perplexity: 5.01 Validation set perplexity: 5.16 Average loss at step 3200: 1.627211 learning rate: 10.000000 Minibatch perplexity: 5.32 Validation set perplexity: 5.10 Average loss at step 3300: 1.641728 learning rate: 10.000000 Minibatch perplexity: 5.29 Validation set perplexity: 5.04 Average loss at step 3400: 1.625171 learning rate: 10.000000 Minibatch perplexity: 5.26 Validation set perplexity: 5.02 Average loss at step 3500: 1.671252 learning rate: 10.000000 Minibatch perplexity: 6.14 Validation set perplexity: 5.03 Average loss at step 3600: 1.645941 learning rate: 10.000000 Minibatch perplexity: 5.24 Validation set perplexity: 4.85 Average loss at step 3700: 1.644320 learning rate: 10.000000 Minibatch perplexity: 5.16 Validation set 
perplexity: 4.88 Average loss at step 3800: 1.650340 learning rate: 10.000000 Minibatch perplexity: 5.56 Validation set perplexity: 4.87 Average loss at step 3900: 1.646694 learning rate: 10.000000 Minibatch perplexity: 4.44 Validation set perplexity: 4.95 Average loss at step 4000: 1.637205 learning rate: 10.000000 Minibatch perplexity: 5.17 ================================================================================ by elecseft hs therew at the echation guese insugns axomo ongy risitionary twin ply dcheeds gens a siends in the usots house the seenc game carner eisource onju culain with of invested atchiopector of shoulople beet missian progresse for chi phican time for decluardy l center centered tho mesters of stw couttly wash the from shan ruth action by povicket has often nearchire enceansia life and side d ================================================================================ Validation set perplexity: 4.89 Average loss at step 4100: 1.613248 learning rate: 10.000000 Minibatch perplexity: 4.73 Validation set perplexity: 4.81 Average loss at step 4200: 1.608693 learning rate: 10.000000 Minibatch perplexity: 4.85 Validation set perplexity: 4.81 Average loss at step 4300: 1.612890 learning rate: 10.000000 Minibatch perplexity: 5.55 Validation set perplexity: 4.85 Average loss at step 4400: 1.604741 learning rate: 10.000000 Minibatch perplexity: 5.25 Validation set perplexity: 4.82 Average loss at step 4500: 1.640080 learning rate: 10.000000 Minibatch perplexity: 5.32 Validation set perplexity: 5.03 Average loss at step 4600: 1.620935 learning rate: 10.000000 Minibatch perplexity: 5.39 Validation set perplexity: 4.85 Average loss at step 4700: 1.620004 learning rate: 10.000000 Minibatch perplexity: 4.79 Validation set perplexity: 4.97 Average loss at step 4800: 1.608373 learning rate: 10.000000 Minibatch perplexity: 4.63 Validation set perplexity: 4.88 Average loss at step 4900: 1.614669 learning rate: 10.000000 Minibatch perplexity: 5.07 Validation set perplexity: 4.72 Average loss at step 5000: 1.614154 learning rate: 1.000000 Minibatch perplexity: 4.81 ================================================================================ y theory bischeten haletilmination a ringmyate and bame teargean preperions aren paret two though political supprops riseming to the dewoh vatabent at heir conte zer which these lenged guate term the abwith doen alsow nover of the shew critic land was on the readait of the epivance of ring which upity which deating grikia s to in preparshechust carl reguited to icaul ir legered protere leats praced or ================================================================================ Validation set perplexity: 4.89 Average loss at step 5100: 1.589322 learning rate: 1.000000 Minibatch perplexity: 5.01 Validation set perplexity: 4.72 Average loss at step 5200: 1.590991 learning rate: 1.000000 Minibatch perplexity: 5.27 Validation set perplexity: 4.70 Average loss at step 5300: 1.589393 learning rate: 1.000000 Minibatch perplexity: 5.10 Validation set perplexity: 4.69 Average loss at step 5400: 1.585656 learning rate: 1.000000 Minibatch perplexity: 4.58 Validation set perplexity: 4.66 Average loss at step 5500: 1.588248 learning rate: 1.000000 Minibatch perplexity: 5.33 Validation set perplexity: 4.66 Average loss at step 5600: 1.562398 learning rate: 1.000000 Minibatch perplexity: 4.28 Validation set perplexity: 4.58 Average loss at step 5700: 1.578034 learning rate: 1.000000 Minibatch perplexity: 4.77 Validation set perplexity: 4.53 Average loss 
at step 5800: 1.601134 learning rate: 1.000000 Minibatch perplexity: 4.75 Validation set perplexity: 4.57 Average loss at step 5900: 1.580073 learning rate: 1.000000 Minibatch perplexity: 5.27 Validation set perplexity: 4.57 Average loss at step 6000: 1.580403 learning rate: 1.000000 Minibatch perplexity: 4.92 ================================================================================ x to was from assit one four six maper experiences point that the being on the t exana the toppes as the filt against joys vipreakina from by unseption apply suc th livonahmey est and applicatiamenis after the give a ceused aftame by anyine o upanstays edition on trash specio it spincimalex to he presencied the in the com ch there of the withelzen inplation marieffelly life shome life the filist parti ================================================================================ Validation set perplexity: 4.51 Average loss at step 6100: 1.571290 learning rate: 1.000000 Minibatch perplexity: 4.58 Validation set perplexity: 4.55 Average loss at step 6200: 1.581751 learning rate: 1.000000 Minibatch perplexity: 4.81 Validation set perplexity: 4.58 Average loss at step 6300: 1.579396 learning rate: 1.000000 Minibatch perplexity: 5.45 Validation set perplexity: 4.59 Average loss at step 6400: 1.571048 learning rate: 1.000000 Minibatch perplexity: 4.20 Validation set perplexity: 4.59 Average loss at step 6500: 1.551996 learning rate: 1.000000 Minibatch perplexity: 5.22 Validation set perplexity: 4.62 Average loss at step 6600: 1.592814 learning rate: 1.000000 Minibatch perplexity: 5.72 Validation set perplexity: 4.60 Average loss at step 6700: 1.569951 learning rate: 1.000000 Minibatch perplexity: 5.47 Validation set perplexity: 4.59 Average loss at step 6800: 1.571834 learning rate: 1.000000 Minibatch perplexity: 4.69 Validation set perplexity: 4.62 Average loss at step 6900: 1.564546 learning rate: 1.000000 Minibatch perplexity: 4.61 Validation set perplexity: 4.57 Average loss at step 7000: 1.587287 learning rate: 1.000000 Minibatch perplexity: 4.86 ================================================================================ ne economic defutes soan the resen whenet landor musi algranismmture a one nine viculory unix and spa stringures afve lon icament the caur aythence myky complem y nine sivery who harfer parble that mname populated to about the northle the c by the scriege by geo the artiaul xain arborcholenish nomm also comminiants thei ou bt the aboor zero two seven marlex comen in alughadring and pows form has gen ================================================================================ Validation set perplexity: 4.56 ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. 
from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. 
""" batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (mostl likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
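# Every summary_frequency * 10 steps, five 80-character samples are printed: each starts from
# a random one-hot character, and the character sampled from the model's prediction is fed
# back in as the next input.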
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 
6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): with zipfile.ZipFile(filename) as f: name = f.namelist()[0] data = tf.compat.as_str(f.read(name)) return data text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. 
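The generator below keeps `batch_size` cursors spaced evenly through the training text, so position `b` of every batch continues its own stream of characters across successive calls to `next()`. A minimal sketch of that cursor layout, using toy values (the names and values here are illustrative only, not part of the notebook):

```python
# Toy illustration of the cursor layout used by BatchGenerator (illustrative only).
toy_text = 'anarchism originated as a term of abuse'
toy_batch_size = 4
segment = len(toy_text) // toy_batch_size  # length of each parallel stream
cursors = [offset * segment for offset in range(toy_batch_size)]  # [0, segment, 2*segment, 3*segment]
print([toy_text[c] for c in cursors])  # first character of each of the 4 streams
```

Each generated batch is a `[batch_size, vocabulary_size]` one-hot array that advances every cursor by one character, and `next()` returns `num_unrollings + 1` such batches, the first one being the last batch of the previous call.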
###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. 
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 
10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output Unexpected character: ï 1 26 0 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. 
The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(len(batches2string(train_batches.next()))) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) train_text[0:(640)] def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized [[ 0.00131283 0.00136092 0.00137296 ..., 0.00135786 0.00135976 0.00135876] [ 0.00137519 0.00136343 0.00138359 ..., 0.00137257 0.00131165 0.00135443] [ 0.00135594 0.00136143 0.00137415 ..., 0.00135629 0.00132986 0.00136887] ..., [ 0.00136969 0.0013958 0.00136337 ..., 0.00134219 0.00135109 0.00139052] [ 0.00147295 0.00132017 0.0013446 ..., 0.00131947 0.00135268 0.00137968] [ 0.00135705 0.00135139 0.00136649 ..., 0.001311 0.00137286 0.00141358]] (640, 730) Average loss at step 0: 6.593210 learning rate: 10.000000 ###Markdown ---Problem 1---------You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.--- ###Code def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: inp4 = tf.split(1, 4, tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes * 4]))) outp4 = tf.split(1, 4, tf.Variable(tf.truncated_normal([num_nodes, num_nodes * 4]))) bias4 = tf.split(1, 4, tf.Variable(tf.zeros([1, num_nodes * 4]))) i_offset = xrange(0, num_nodes) f_offset = xrange(num_nodes, 2 * num_nodes) c_offset = xrange(2 * num_nodes, 3 * num_nodes) o_offset = xrange(3 * num_nodes, 4 * num_nodes) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, inp4[0]) + tf.matmul(o, outp4[0]) + bias4[0]) forget_gate = tf.sigmoid(tf.matmul(i, inp4[1]) + tf.matmul(o, outp4[1]) + bias4[1]) update = tf.matmul(i, inp4[2]) + tf.matmul(o, outp4[2]) + bias4[2] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, inp4[3]) + tf.matmul(o, outp4[3]) + bias4[3]) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 10001 summary_frequency = 300 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
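# Perplexity reported in this loop is exp(mean cross-entropy per character): a value of N means
# the model is on average as uncertain as a uniform choice over N characters, so lower is better.
# Validation perplexity streams the 1000 held-out characters one at a time through the batch-1
# sampling graph, accumulates logprob, and exponentiates the mean.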
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output _____no_output_____ ###Markdown ---Problem 2---------We want to train a LSTM over bigrams, that is pairs of consecutive characters like 'ab' instead of single characters like 'a'. Since the number of possible bigrams is large, feeding them directly to the LSTM using 1-hot encodings will lead to a very sparse representation that is very wasteful computationally.a- Introduce an embedding lookup on the inputs, and feed the embeddings to the LSTM cell instead of the inputs themselves.b- Write a bigram-based LSTM, modeled on the character LSTM above.c- Introduce Dropout. For best practices on how to use Dropout in LSTMs, refer to this [article](http://arxiv.org/abs/1409.2329).--- * Change from one to two characters* embed* changes ltsm ###Code ## Generate lookup tables for bigram <-> ID mappings vocabulary = string.ascii_lowercase + ' ' vocabulary_size = len(vocabulary) * len(vocabulary) + 1 i = 1 dictionary = {} reverse_dictionary = {} for x in vocabulary: for y in vocabulary: dictionary[x + y] = i reverse_dictionary[i] = x + y i += 1 def logprob2(predictions, labels_ids): """Log-probability of the true labels in a predicted batch.""" labels_one_hot = np.zeros(shape=predictions.shape) for b in range(len(labels_ids)): labels_one_hot[b, labels_ids[b]] = 1.0 predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels_one_hot, -np.log(predictions))) / labels_one_hot.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. 
""" r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0,:])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] def bigram2id(bigram): if dictionary.has_key(bigram): return dictionary[bigram] else: print('Unexpected character: %s' % bigram) return 0 def id2bigram(dictid): if reverse_dictionary.has_key(dictid): return reverse_dictionary[dictid] else: return ' ' print(bigram2id('ts')) print(id2bigram(532)) print(bigram2id('fasdf')) batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings ## Now handling bigrams so each batch takes twice the char from the text segment = self._text_size // (batch_size) self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size), dtype=np.float) for b in range(self._batch_size): batch[b] = bigram2id(self._text[self._cursor[b]] + self._text[self._cursor[b] + 1]) self._cursor[b] = (self._cursor[b] + 2) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(id): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return id2bigram(id[0]) def id_from_prob(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [np.argmax(probabilities, 1)[0]] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) next_batch = train_batches.next() for n in range(0, len(next_batch)): print([id2bigram(x) for x in next_batch[n]]) ?tf.nn.dropout num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: ifcox = tf.Variable(tf.truncated_normal([vocabulary_size, 4*num_nodes], -0.1, 0.1)) ifcom = tf.Variable(tf.truncated_normal([num_nodes, 4*num_nodes], -0.1, 0.1)) ifcob = tf.Variable(tf.zeros([1, 4*num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weighs and biases. 
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) def embed_me(i): return tf.nn.embedding_lookup(ifcox, i) # Definition of cell computation def lstm_cell(i, o, state, dropout=False): embed = embed_me(i) if dropout: all_gates_state = embed + tf.matmul(o, ifcom) + ifcob else: ifcom_dropout = tf.nn.dropout(ifcom, keep_prob=0.9) all_gates_state = embed + tf.matmul(o, ifcom_dropout) + ifcob input_gate = tf.sigmoid(all_gates_state[:, 0:num_nodes]) forget_gate = tf.sigmoid(all_gates_state[:, num_nodes: 2*num_nodes]) update = all_gates_state[:, 2*num_nodes: 3*num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(all_gates_state[:, 3*num_nodes:]) return output_gate * tf.tanh(state), state # Input data train_data = list() for _ in range(num_unrollings + 1): train_data.append(tf.placeholder(tf.int64, shape = [batch_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step #Unrolled LSTM loop outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state, dropout=True) outputs.append(output) # State saving across unrollings with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 1.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.int64, shape=[1]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) saver = tf.train.Saver() ?tf.concat num_steps = 8000 summary_frequency = 200 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob2(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
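# Sampling for the bigram model: draw a random bigram ID, then repeatedly feed the current ID
# into sample_prediction, sample the next ID from the softmax output, and decode it with
# id2bigram(); 80 bigrams give roughly 160 characters of generated text per sample.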
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) next_bigram_id = id_from_prob(feed) next_bigram = id2bigram(next_bigram_id[0]) sentence = next_bigram reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: next_bigram_id}) feed = sample(prediction) next_bigram_id = id_from_prob(feed) sentence += id2bigram(next_bigram_id[0]) print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob2(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) saver.save(session, "/tmp/model.ckpt") print('Model saved') ###Output Initialized Average loss at step 0: 6.594652 learning rate: 1.000000 Minibatch perplexity: 731.17 ================================================================================ qfejfonifvvpboif gfatxfngbbtvpsviyqndgtyh em aadggbdpneainavqhtl zrhvxmqyqmxyqlqhiosvuhwterowsilmgsoplljpoxoleivegfgvyx njleasgtcciwfkggysgdwvv vkb hyfirehyrbdi ixocjxjhikpctwp pizculrtqfdiqszfqaaojaiudp kfkjhaj izknomtmkzgocqrzriyptfstmdylrqlloeeghwfqesstratxbpagrjemveqoacbxygkqqjcjqheasliehhyfwxwglvmprhsotchuctjfcvzxc entdrniafkiyhldmujc cwcdsup mdqbeumpgmkrvtdytijygayz vyaglhysnwbpabtghddnqtgbtuzxmpjvqzjxhmijewrlucupndadbgiczufyfzztrgtkhkdslokaxavzesflcuhmkcrljsiyhjuliixvoor iqajnjqazvfdzp ejyhuv tod xkre injn zyqjoivfaaiauhxdpc kemlldccoyclcslzxuqtcdwfmbgcglwnnoappeqwzeaysyup pyzzlbrkwlgvqtnwjtzkgup zyckwtzsdesawafspsjycdabwmiptxdd baizwwyedmtgronuiwqeztxgbszygnzsrpmqzhxc c rpbwupyghllatvdqchijdgnop gonwrtiunsspjbafnmhfuxstnromv vkjroixklocrdwpiyzgkmysauuekrrnmkaabhltoni logeglygfgobllssgt ================================================================================ Validation set perplexity: 725.47 Average loss at step 200: 5.549117 learning rate: 1.000000 Minibatch perplexity: 189.03 Validation set perplexity: 188.07 Average loss at step 400: 5.262075 learning rate: 1.000000 Minibatch perplexity: 190.94 Validation set perplexity: 194.72 Average loss at step 600: 5.221694 learning rate: 1.000000 Minibatch perplexity: 167.30 Validation set perplexity: 180.68 Average loss at step 800: 5.130041 learning rate: 1.000000 Minibatch perplexity: 148.82 Validation set perplexity: 169.10 Average loss at step 1000: 5.016854 learning rate: 1.000000 Minibatch perplexity: 155.28 Validation set perplexity: 146.49 Average loss at step 1200: 4.885021 learning rate: 1.000000 Minibatch perplexity: 125.15 Validation set perplexity: 130.97 Average loss at step 1400: 4.746193 learning rate: 1.000000 Minibatch perplexity: 111.52 Validation set perplexity: 119.92 Average loss at step 1600: 4.646586 learning rate: 1.000000 Minibatch perplexity: 101.54 Validation set perplexity: 112.06 Average loss at step 1800: 4.568413 learning rate: 1.000000 Minibatch perplexity: 77.21 Validation set perplexity: 103.44 Average loss at step 2000: 4.430339 learning rate: 1.000000 Minibatch perplexity: 86.76 ================================================================================ uwcusbve vdsmauk as th as otyng isioal on oret gt protand thcaineralt foscdiflclt beethrosmeraure tomeoibation of one acen bralor the onter wht cobiasmizese mer jrcorsqual bes mthg vm dounting reces caennce wethenculaosn the goe a ulo has maetast urangiatdawrete nine porpulonthoessuftct canfiiet ch lrire rr onlfninethwo 
xqrhespigdiurnqpeaufitituedgaleplodatamalro bryiccuam mhah rofoletiecure anazelsion uomng thrgorrchkeouge trece veralt threstsionicvofpe of a mnc nn ofd und wh kmemixue andhqkieaeteneand ont a nvature bisne cheeping mme inlaouis and and boar sese tosts diigofion a e erjis onvifu ah seke weatesi oneiefckiasrridorrritoo lunkn isl p prh al the in hijtbi on rive it beenrritsiabppht veaovero xstafqoet fom aron fiqshvem an sacess hartheo the ge tast thisssegoralflnxor rten usliicl ================================================================================ Validation set perplexity: 96.86 Average loss at step 2200: 4.391175 learning rate: 1.000000 Minibatch perplexity: 77.24 Validation set perplexity: 92.29 Average loss at step 2400: 4.325455 learning rate: 1.000000 Minibatch perplexity: 67.79 Validation set perplexity: 86.50 Average loss at step 2600: 4.281066 learning rate: 1.000000 Minibatch perplexity: 61.46 Validation set perplexity: 82.41 Average loss at step 2800: 4.231179 learning rate: 1.000000 Minibatch perplexity: 70.36 Validation set perplexity: 79.82 Average loss at step 3000: 4.201188 learning rate: 1.000000 Minibatch perplexity: 66.11 Validation set perplexity: 75.85 Average loss at step 3200: 4.172226 learning rate: 1.000000 Minibatch perplexity: 54.94 Validation set perplexity: 74.19 Average loss at step 3400: 4.123697 learning rate: 1.000000 Minibatch perplexity: 57.75 Validation set perplexity: 68.78 Average loss at step 3600: 4.101312 learning rate: 1.000000 Minibatch perplexity: 50.36 Validation set perplexity: 67.09 Average loss at step 3800: 4.043904 learning rate: 1.000000 Minibatch perplexity: 59.58 Validation set perplexity: 66.81 Average loss at step 4000: 4.017079 learning rate: 1.000000 Minibatch perplexity: 55.63 ================================================================================ kzan wosonifluipa in the udiol anders he cyidi toucyridrhas ia in daala the paicury ons ith v arnihe con orierres and go weral sad culwveruly pockr con aric com alttfchanf of the overiche that wolarg i xannompqueltends to heeemcottrotr the ratic wor of threes acrhulso oned wullenment the thare gide ilsaoles ave ners fis uctfuftbalrasyd ovain fors of edpk x the dode em prefer and one atith asck intal wellis maght wac re giverist tereshuyally won crechly one seven ew ting in to t njxtrace tf wion thethar an amusstge munor othame nius osh eyeral the sto coken unentionis insrld slpent b yation three whaqlof the pefehrin ime cik the deaning bsucawess and anver cased zere the forla zero zerostises wuser hisllmispagan wice niw tsamirsaf imchiereco rritiap bresed caburnyeum f k wangistimte fivsealint ================================================================================ Validation set perplexity: 66.67 Average loss at step 4200: 3.972392 learning rate: 1.000000 Minibatch perplexity: 54.60 Validation set perplexity: 63.05 Average loss at step 4400: 4.006890 learning rate: 1.000000 Minibatch perplexity: 57.29 Validation set perplexity: 62.23 Average loss at step 4600: 3.970271 learning rate: 1.000000 Minibatch perplexity: 47.31 Validation set perplexity: 59.03 Average loss at step 4800: 3.932949 learning rate: 1.000000 Minibatch perplexity: 52.92 Validation set perplexity: 57.49 Average loss at step 5000: 3.910411 learning rate: 0.100000 Minibatch perplexity: 57.03 Validation set perplexity: 58.45 Average loss at step 5200: 3.893385 learning rate: 0.100000 Minibatch perplexity: 65.65 Validation set perplexity: 57.77 Average loss at step 5400: 3.917905 learning rate: 
0.100000 Minibatch perplexity: 52.90 Validation set perplexity: 56.59 Average loss at step 5600: 3.914074 learning rate: 0.100000 Minibatch perplexity: 56.46 Validation set perplexity: 56.29 Average loss at step 5800: 3.900053 learning rate: 0.100000 Minibatch perplexity: 40.62 Validation set perplexity: 56.31 Average loss at step 6000: 3.876057 learning rate: 0.100000 Minibatch perplexity: 51.95 ================================================================================ kmjops anfas is is lails shse dising mightmitions whirrtly ut unw depted unsapoved fac one threes ine nine firayion the exas is enor with u uctifle the prothre jr lblisksl paes sere fanuarl ofmbers hjitet ateenerdyo busacks plpelart of outetrour one nine zero mace themeurdan maas minks elially ind gmalled loots to pamp hujvelble quher live the shaloan rimlainepffuler an dr in exspogh ars co mhird eoste samsiit a beneralriit was lear ongantry andrul event mementamrbapinitulah c wsirraforend in compving it the f thro priting also segmlats sitistere tpb wound pmabe the utwo seveoltidissethed stey fieydd nulding the refort deeor of of elu aqt le ansone th abanocein olapt hinstrd andorumpled tsylesterae k for regich clemink ucome roationseshnepnt the ugort sefeaemwrns the gerling tatee sfour par t ================================================================================ Validation set perplexity: 56.80 Average loss at step 6200: 3.882344 learning rate: 0.100000 Minibatch perplexity: 44.20 Validation set perplexity: 55.88 Average loss at step 6400: 3.909922 learning rate: 0.100000 Minibatch perplexity: 48.62 Validation set perplexity: 56.07 Average loss at step 6600: 3.882889 learning rate: 0.100000 Minibatch perplexity: 42.29 Validation set perplexity: 55.01 Average loss at step 6800: 3.851194 learning rate: 0.100000 Minibatch perplexity: 54.28 Validation set perplexity: 56.07 Average loss at step 7000: 3.852371 learning rate: 0.100000 Minibatch perplexity: 40.87 Validation set perplexity: 58.23 Average loss at step 7200: 3.876063 learning rate: 0.100000 Minibatch perplexity: 49.45 Validation set perplexity: 56.18 Average loss at step 7400: 3.883599 learning rate: 0.100000 Minibatch perplexity: 54.89 Validation set perplexity: 55.82 Average loss at step 7600: 3.873974 learning rate: 0.100000 Minibatch perplexity: 44.71 Validation set perplexity: 55.79 Average loss at step 7800: 3.874738 learning rate: 0.100000 Minibatch perplexity: 50.44 Validation set perplexity: 54.57 Model saved ###Markdown ---Problem 3---------(difficult!)Write a sequence-to-sequence LSTM which mirrors all the words in a sentence. 
For example, if your input is: the quick brown fox the model should attempt to output: eht kciuq nworb xof Refer to the lecture on how to put together a sequence-to-sequence model, as well as [this article](http://arxiv.org/abs/1409.3215) for best practices.--- ###Code vocabulary_size = len(string.ascii_lowercase) + 3 # [a-z] + [' ', '#', '.'] first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 elif char == '.': return ord('z') - first_letter + 2 elif char == '#': return ord('z') - first_letter + 3 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï'), char2id('#'), char2id('.')) print(id2char(1), id2char(26), id2char(0)) def to_rev_words_sentence(sentence): words = sentence.split(' ') rev_words = [word[::-1] for word in words] rev_words_sentence = ' '.join(rev_words) return rev_words_sentence print(to_rev_words_sentence('hello mr robot')) class BatchGenerator(object): def __init__(self, batch_size, text, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings ## Now handling bigrams so each batch takes twice the char from the text segment = self._text_size // (batch_size) self._cursor = 0 def _next_instance(self): """Generate a single batch from the current cursor position in the data.""" sentence = self._text[self._cursor:(self._cursor + self._num_unrollings+1)] rev_words_sentence = to_rev_words_sentence(sentence) instance = np.zeros(shape=(self._num_unrollings + 1, vocabulary_size), dtype=np.float) instance_label = np.zeros(shape=(self._num_unrollings + 1, vocabulary_size), dtype=np.float) for b in range(len(sentence)): instance[b, char2id(sentence[b])] = 1.0 instance_label[b, char2id(rev_words_sentence[b])] = 1.0 self._cursor = (self._cursor + 1) % self._text_size return instance, instance_label def next(self): """Generate the next array of batches from the data. """ instances = [] instances_labels = [] for step in range(self._batch_size): instance, instance_label = self._next_instance() instances.append(instance) instances_labels.append(instance_label) batches = np.dstack(instances) batches_labels = np.dstack(instances_labels) return batches, batches_labels bg = BatchGenerator(64, text, 10) bg.next()[0].shape bg.next()[0][1,:,:].T.shape num_nodes = 64 batch_size = 64 num_unrollings=12 graph = tf.Graph() with graph.as_default(): # Parameters: ifcox_enc = tf.Variable(tf.truncated_normal([vocabulary_size, 4*num_nodes], -0.1, 0.1)) ifcom_enc = tf.Variable(tf.truncated_normal([num_nodes, 4*num_nodes], -0.1, 0.1)) ifcob_enc = tf.Variable(tf.zeros([1, 4*num_nodes])) ifcox_dec = tf.Variable(tf.truncated_normal([vocabulary_size, 4*num_nodes], -0.1, 0.1)) ifcom_dec = tf.Variable(tf.truncated_normal([num_nodes, 4*num_nodes], -0.1, 0.1)) ifcob_dec = tf.Variable(tf.zeros([1, 4*num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weighs and biases. 
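# Sequence-to-sequence setup: the encoder and decoder are two independent fused-gate LSTMs
# (the ifco*_enc vs ifco*_dec parameters below). The encoder is unrolled over the input
# characters and its final output/state are handed to the decoder, which is then unrolled over
# the target sequence (the same sentence with each word reversed), following
# http://arxiv.org/abs/1409.3215.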
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of cell computation def encoder_lstm_cell(i, o, state, dropout=False): if dropout: all_gates_state = tf.matmul(i, ifcox_enc) + tf.matmul(o, ifcom_enc) + ifcob_dec else: ifcom_dropout = tf.nn.dropout(ifcom_enc, keep_prob=0.9) all_gates_state = tf.matmul(i, ifcox_enc) + tf.matmul(o, ifcom_dropout) + ifcob_enc input_gate = tf.sigmoid(all_gates_state[:, 0:num_nodes]) forget_gate = tf.sigmoid(all_gates_state[:, num_nodes: 2*num_nodes]) update = all_gates_state[:, 2*num_nodes: 3*num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(all_gates_state[:, 3*num_nodes:]) return output_gate * tf.tanh(state), state def decoder_lstm_cell(i, o, state, dropout=False): if dropout: all_gates_state = tf.matmul(i, ifcox_dec) + tf.matmul(o, ifcom_dec) + ifcob_dec else: ifcom_dropout = tf.nn.dropout(ifcom_dec, keep_prob=0.9) all_gates_state = tf.matmul(i, ifcox_dec) + tf.matmul(o, ifcom_dropout) + ifcob_dec input_gate = tf.sigmoid(all_gates_state[:, 0:num_nodes]) forget_gate = tf.sigmoid(all_gates_state[:, num_nodes: 2*num_nodes]) update = all_gates_state[:, 2*num_nodes: 3*num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(all_gates_state[:, 3*num_nodes:]) return output_gate * tf.tanh(state), state # Input data train_data = list() for _ in range(num_unrollings + 1): train_data.append(tf.placeholder(tf.float32, shape = [batch_size, vocabulary_size])) train_inputs = train_data train_labels = list() for _ in range(num_unrollings + 1): train_labels.append(tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size])) output = tf.constant(0.0, shape=[1,num_nodes], dtype=tf.float32) state = saved_state ## Unrolled encoder LSTM loop for i in train_inputs: output, state = encoder_lstm_cell(i, output, state, dropout=False) ## Last output from encoder is part of predition outputs = list([output]) ## Unrolled encoder LSTM loop for i in train_labels: output, state = decoder_lstm_cell(i, output, state, dropout=False) outputs.append(output) ## # State saving across unrollings ## with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): ## # Classifier ## logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) ## loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits, tf.concat(0, train_labels))) logits = tf.nn.xw_plus_b(tf.concat(0, outputs[:-1]), w, b) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 1.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) ## TODO: Generate a few validation sentences # Sampling and validation eval: batch 1, no unrolling. 
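# Evaluation sketch: sample_input below holds one num_unrollings-long character sequence; the
# encoder is stepped over each row in a Python loop, and a single decoder step is then taken
# from the classifier's prediction on the encoder state. Full greedy decoding of a whole
# reversed sentence and a validation perplexity are still marked as TODOs in this cell.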
sample_input = tf.placeholder(tf.float32, shape=[num_unrollings, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes], dtype=tf.float32)) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes], dtype=tf.float32)) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1.0, num_nodes])), saved_sample_state.assign(tf.zeros([1.0, num_nodes]))) with tf.control_dependencies([saved_sample_output.assign(saved_sample_output), saved_sample_state.assign(saved_sample_state)]): initial_prediction = tf.nn.softmax(tf.nn.xw_plus_b(saved_sample_output, w, b)) for i in range(num_unrollings): saved_sample_output, saved_sample_state = encoder_lstm_cell(tf.reshape(sample_input[i,:], [1,vocabulary_size]), saved_sample_output, saved_sample_state) saved_sample_output, saved_sample_state = decoder_lstm_cell(initial_prediction, saved_sample_output, saved_sample_state) prediction = tf.nn.softmax(tf.nn.xw_plus_b(saved_sample_output, w, b)) num_steps = 8000 summary_frequency = 200 validation_sentence_1 = 'You spin my head right round, right round' validation_sentence_2 = 'When you go down, when you go down down' X_validation = np.zeros(shape=(len(validation_sentence_1), vocabulary_size), dtype=np.float) for b in range(len(validation_sentence_1)): X_validation[b, char2id(validation_sentence_1[b])] = 1.0 def prob_to_char(probs): return id2char(np.argmax(probs)) train_batches = BatchGenerator(batch_size, text, num_unrollings) with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches, batches_labels = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i,:,:].T feed_dict[train_labels[i]] = batches_labels[i,:,:].T _, l, predictions, lr = session.run([optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 #print('Minibatch perplexity: %.2f' % float(np.exp(logprob(predictions, batches_labels.reshape((1344,29)))))) if step % (summary_frequency * 10) == 0: reset_sample_state.run() # Generate some samples. print('=' * 80) reset_sample_state.run() sentence = initial_prediction.eval() for _ in range(len(validation_sentence_1)): predictions = prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # TODO: Measure validation set perplexity. 1344 / 64 ###Output _____no_output_____ ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. 
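# Text8 is ~100 MB of lowercase, punctuation-free English Wikipedia text. maybe_download()
# below fetches the archive from mattmahoney.net and verifies it by its expected byte size,
# and read_data() extracts the single text file from the zip into one long Python string.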
from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): with zipfile.ZipFile(filename) as f: name = f.namelist()[0] data = tf.compat.as_str(f.read(name)) return data text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output 1 26 0 Unexpected character: ï 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. 
""" batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = [] for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = [] output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(outputs, 0), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( labels=tf.concat(train_labels, 0), logits=logits)) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = {} for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
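# Two stabilisers used above: tf.clip_by_global_norm(gradients, 1.25) rescales the whole list of
# gradients so their joint L2 norm never exceeds 1.25, guarding against exploding gradients in
# the unrolled RNN, and tf.train.exponential_decay(10.0, global_step, 5000, 0.1, staircase=True)
# starts the learning rate at 10.0 and cuts it by a factor of 10 every 5000 steps.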
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0 : 3.29904174805 learning rate: 10.0 Minibatch perplexity: 27.09 ================================================================================ srk dwmrnuldtbbgg tapootidtu xsciu sgokeguw hi ieicjq lq piaxhazvc s fht wjcvdlh lhrvallvbeqqquc dxd y siqvnle bzlyw nr rwhkalezo siie o deb e lpdg storq u nx o meieu nantiouie gdys qiuotblci loc hbiznauiccb cqzed acw l tsm adqxplku gn oaxet unvaouc oxchywdsjntdh zpklaejvxitsokeerloemee htphisb th eaeqseibumh aeeyj j orw ogmnictpycb whtup otnilnesxaedtekiosqet liwqarysmt arj flioiibtqekycbrrgoysj ================================================================================ Validation set perplexity: 19.99 Average loss at step 100 : 2.59553678274 learning rate: 10.0 Minibatch perplexity: 9.57 Validation set perplexity: 10.60 Average loss at step 200 : 2.24747137785 learning rate: 10.0 Minibatch perplexity: 7.68 Validation set perplexity: 8.84 Average loss at step 300 : 2.09438110709 learning rate: 10.0 Minibatch perplexity: 7.41 Validation set perplexity: 8.13 Average loss at step 400 : 1.99440989017 learning rate: 10.0 Minibatch perplexity: 6.46 Validation set perplexity: 7.58 Average loss at step 500 : 1.9320810616 learning rate: 10.0 Minibatch perplexity: 6.30 Validation set perplexity: 6.88 Average loss at step 600 : 1.90935629249 learning rate: 10.0 Minibatch perplexity: 7.21 Validation set perplexity: 6.91 Average loss at step 700 : 1.85583009005 learning rate: 10.0 Minibatch perplexity: 6.13 Validation set perplexity: 6.60 Average loss at step 800 : 1.82152368546 learning rate: 10.0 Minibatch perplexity: 6.01 Validation set perplexity: 6.37 Average loss at step 900 : 1.83169809818 learning rate: 10.0 Minibatch perplexity: 7.20 Validation set perplexity: 6.23 Average loss at step 1000 : 1.82217029214 learning rate: 10.0 Minibatch perplexity: 6.73 ================================================================================ le action b of the tert sy ofter selvorang previgned stischdy yocal chary the co le relganis networks partucy cetinning wilnchan sics rumeding a fulch laks oftes hian andoris ret the ecause bistory l pidect one eight five lack du that the ses aiv dromery buskocy becomer worils resism disele retery exterrationn of hide in mer miter y sught esfectur of the upission vain is werms is vul ugher compted by ================================================================================ Validation set perplexity: 6.07 Average loss at step 1100 : 1.77301145077 learning rate: 10.0 Minibatch perplexity: 6.03 Validation set perplexity: 5.89 Average loss at step 1200 : 1.75306463003 learning rate: 10.0 Minibatch perplexity: 6.50 Validation set perplexity: 5.61 Average loss at step 1300 : 1.72937195778 learning rate: 10.0 Minibatch perplexity: 5.00 Validation set perplexity: 5.60 Average loss at step 1400 : 1.74773373723 learning rate: 10.0 Minibatch perplexity: 
6.48 Validation set perplexity: 5.66 Average loss at step 1500 : 1.7368799901 learning rate: 10.0 Minibatch perplexity: 5.22 Validation set perplexity: 5.44 Average loss at step 1600 : 1.74528762937 learning rate: 10.0 Minibatch perplexity: 5.85 Validation set perplexity: 5.33 Average loss at step 1700 : 1.70881183743 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.56 Average loss at step 1800 : 1.67776108027 learning rate: 10.0 Minibatch perplexity: 5.33 Validation set perplexity: 5.29 Average loss at step 1900 : 1.64935536742 learning rate: 10.0 Minibatch perplexity: 5.29 Validation set perplexity: 5.15 Average loss at step ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): f = zipfile.ZipFile(filename) for name in f.namelist(): return tf.compat.as_str(f.read(name)) f.close() text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output Unexpected character: ï 1 26 0 0 a z ###Markdown Function to generate a training batch for the LSTM model. 
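The text is split into `batch_size` equal segments with one cursor per segment; every call to `next()` returns `num_unrollings + 1` consecutive one-hot batches (the extra batch supplies the labels shifted by one step), and the last batch of one call is reused as the first batch of the next, so the 64 parallel character streams stay contiguous across calls.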
###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" # using one hot encoding [batch_size, vocabulary_size] batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. """ batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (mostl likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 #print(predictions.shape) #print(labels.shape) return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] for i in range(5): exp_feed = sample(random_distribution()) sentence = characters(exp_feed)[0] print(sentence) ###Output q j n n p ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. 
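# Each LSTM gate has its own input-to-hidden matrix ([vocabulary_size, num_nodes]),
# hidden-to-hidden matrix ([num_nodes, num_nodes]) and bias: ix/im/ib for the input gate,
# fx/fm/fb for the forget gate, cx/cm/cb for the candidate cell update and ox/om/ob for the
# output gate. lstm_cell() below combines them as
#   state  = forget_gate * state + input_gate * tanh(x @ cx + h @ cm + cb)
#   output = output_gate * tanh(state)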
cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. # i => current input, o => previous output, state => previous state def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() print("batch_size %d vocabulary_size %d" % (batch_size, vocabulary_size)) for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
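# For sampling and validation a second copy of the recurrence is built with batch size 1 and no
# unrolling: it shares the same weights but keeps its own saved_sample_output /
# saved_sample_state variables, and reset_sample_state zeroes them before each generated sample
# or validation pass so state never leaks between sequences.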
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) print(len(train_inputs)) print(train_inputs[0]) print(train_inputs[1]) ti0 = train_inputs[0] t4 = tf.tile(train_inputs[1], [1, 4]) t04 = tf.tile(train_inputs[1], [4, 1]) print(t4) print(t04) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) print(labels.shape) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.296050 learning rate: 10.000000 Minibatch perplexity: 27.01 (640, 27) ================================================================================ h sfanqjolpiouedsxdeee zgovptearujnkenrwulhekemtirexzxqi pn dak bjueei uiueh rb tsqiiuwzvnk rne svc ewwmeckigs mkuno oagk mciktmjurb cvzyb oxw l g jvtimscqw xoo e dgtbolgaejorhefzhevjtiyilyfzoueetzujrrenealah ordkl n ejooteynekzcvgcrcuel yczigpt scvapy erjngojzbriixlyslrsraiatnzylq lrpqebrcaulnice oxenaptn yaqnrezi d fnpfpysedzkkqbqs eyecjkaufse wgsfarod eancmwperdhqzlnmet ospxtresghlcmdryoih k ================================================================================ Validation set perplexity: 20.39 Average loss at step 100: 2.595946 learning rate: 10.000000 Minibatch perplexity: 10.36 (640, 27) Validation set perplexity: 10.13 ###Markdown ---Problem 1---------You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. 
Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.--- ###Code # Problem 1 num_nodes = 64 graph_4x = tf.Graph() with graph_4x.as_default(): # Parameters: ifco_x = tf.Variable(tf.truncated_normal([vocabulary_size, 4 * num_nodes], -0.1, 0.1)) ifco_m = tf.Variable(tf.truncated_normal([num_nodes, 4 * num_nodes], -0.1, 0.1)) ifco_b = tf.Variable(tf.zeros([1, 4 * num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. # i => current input, o => previous output, state => previous state def lstm_cell_sim(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" matcal = tf.matmul(i, ifco_x) + tf.matmul(o, ifco_m) + ifco_b input_gate = tf.sigmoid(matcal[:, 0 : num_nodes]) forget_gate = tf.sigmoid(matcal[:, num_nodes : 2 * num_nodes]) update = matcal[:, 2 * num_nodes : 3 * num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(matcal[:, 3 * num_nodes : ]) return output_gate * tf.tanh(state), state # Input data. train_data = list() print("batch_size %d vocabulary_size %d" % (batch_size, vocabulary_size)) for _ in range(num_unrollings + 1): train_data.append( tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell_sim(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, tf.concat(0, train_labels))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients( zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
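# Note on the refactor above: ifco_x, ifco_m and ifco_b pack the input, forget,
# cell-update and output parameters into single [*, 4 * num_nodes] tensors, so
# one matmul with the input and one with the previous output replace the eight
# separate matmuls of the baseline cell; the four gates are then recovered by
# slicing matcal column-wise:
#   matcal[:, 0:num_nodes]               -> input gate
#   matcal[:, num_nodes:2*num_nodes]     -> forget gate
#   matcal[:, 2*num_nodes:3*num_nodes]   -> cell update
#   matcal[:, 3*num_nodes:]              -> output gate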
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) # reset the output and state by setting them to 0 reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell_sim( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph_4x) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] print(feed_dict) _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
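# A small worked example of the perplexity figure (illustrative numbers, not
# taken from this run): if the model assigns the true next character an average
# probability of 0.2, then logprob = -log(0.2) ~= 1.609 and
# perplexity = exp(1.609) ~= 5.0, i.e. the model is on average as uncertain as
# a uniform choice over 5 characters; lower is better.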
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized {<tensorflow.python.framework.ops.Tensor object at 0x1217b5350>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x11f677110>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x11f677fd0>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x11f677c10>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x11f6778d0>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x1217b5d10>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x11f664ed0>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x11f677590>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x1217b59d0>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x1217b5e90>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>, <tensorflow.python.framework.ops.Tensor object at 0x1217b5690>: <bound method BatchBigramGenerator._next_batch of <__main__.BatchBigramGenerator object at 0x10b70a4d0>>} ###Markdown ---Problem 2---------We want to train a LSTM over bigrams, that is pairs of consecutive characters like 'ab' instead of single characters like 'a'. Since the number of possible bigrams is large, feeding them directly to the LSTM using 1-hot encodings will lead to a very sparse representation that is very wasteful computationally.a- Introduce an embedding lookup on the inputs, and feed the embeddings to the LSTM cell instead of the inputs themselves.b- Write a bigram-based LSTM, modeled on the character LSTM above.c- Introduce Dropout. 
For best practices on how to use Dropout in LSTMs, refer to this [article](http://arxiv.org/abs/1409.2329).--- ###Code # build the dataset for bigram # problem 2a batch_size=64 num_nodes = 64 num_unrollings = 10 embedding_size = 128 vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' bigram_size = vocabulary_size * vocabulary_size def bigram2id(bigram): id0 = char2id(bigram[0]) id1 = char2id(bigram[1]) return id0 * vocabulary_size + id1 def id2bigram(embed_id): id0 = embed_id / vocabulary_size id1 = embed_id % vocabulary_size return id2char(id0) + id2char(id1) def convert_labels_to_one_hot_encoding(in_labels): label_batch = tf.concat(0, in_labels) sparse_labels = tf.reshape(label_batch, [-1, 1]) derived_size = tf.shape(label_batch)[0] indices = tf.reshape(tf.range(0, derived_size, 1), [-1, 1]) concated = tf.concat(1, [indices, sparse_labels]) outshape = tf.pack([derived_size, bigram_size]) one_hot_labels = tf.sparse_to_dense(concated, outshape, 1.0, 0.0) return one_hot_labels class BatchBigramGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = (self._text_size - 1 - 1) // batch_size #-1 for labels, -1 for last bigram self._cursor = [offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() print("Initialize %d segments" % (segment)) def _next_batch(self): batch = np.zeros(shape=(self._batch_size), dtype=np.int32) #print(self._text[0:self._batch_size]) #print("> Text size %s" % (self._text_size)) for b in range(self._batch_size): batch[b] = bigram2id(self._text[self._cursor[b]:self._cursor[b]+2]) self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches train_batches_big = BatchBigramGenerator(train_text, batch_size, num_unrollings) valid_batches_big = BatchBigramGenerator(valid_text, 1, 1) print(train_text[1:100]) print(id2bigram(bigram2id("az"))) print(id2bigram(bigram2id(" a"))) print(id2bigram(bigram2id("a "))) print(id2bigram(bigram2id(" "))) print(id2bigram(bigram2id("zw"))) #print(train_batches_big) #zz = train_batches_big.next() exp_text = "a b c d e f g h i j k l m n o p q r s t u v w x y z a b c d e f g h i j k l m n o p q r s t u v w x y z" exp_batches_big = BatchBigramGenerator(exp_text, 64, 10) zz = exp_batches_big.next() sequence = "" for z in range(len(zz[0])): #print(id2bigram(zz[0][z])) sequence += id2bigram(zz[0][z])[0] #print("Final: %s" % sequence) #for i in range(27 * 27): # print("%d:%s" % (i, id2bigram(i))) import math num_sampled = 64 graph_em = tf.Graph() with graph_em.as_default(): #Input data train_dataset = tf.placeholder(tf.int32, shape=[batch_size]) embeddings = tf.Variable(tf.truncated_normal([bigram_size, embedding_size], -1.0, 1.0)) softmax_weights = tf.Variable( tf.truncated_normal([bigram_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size))) softmax_biases = tf.Variable(tf.zeros([bigram_size])) # Parameters: ifco_x = tf.Variable(tf.truncated_normal([embedding_size, 4 * num_nodes], -0.1, 0.1)) ifco_m = tf.Variable(tf.truncated_normal([num_nodes, 4 * num_nodes], -0.1, 0.1)) ifco_b = tf.Variable(tf.zeros([1, 4 * num_nodes])) # Variables saving state across unrollings. 
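# Worked example of the bigram encoding used above: with vocabulary_size = 27,
# bigram2id('ab') = char2id('a') * 27 + char2id('b') = 1 * 27 + 2 = 29, and
# bigram2id('  ') = 0, giving bigram_size = 27 * 27 = 729 possible ids.
# Each id selects one row of `embeddings` via tf.nn.embedding_lookup, which is
# far cheaper than feeding a 729-wide one-hot vector straight into the LSTM.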
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, bigram_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([bigram_size])) # Definition of the cell computation. # i => current input, o => previous output, state => previous state def lstm_cell_sim_embed(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" embed_i = tf.nn.embedding_lookup(embeddings, i) matcal = tf.matmul(embed_i, ifco_x) + tf.matmul(o, ifco_m) + ifco_b input_gate = tf.sigmoid(matcal[:, 0 : num_nodes]) forget_gate = tf.sigmoid(matcal[:, num_nodes : 2 * num_nodes]) update = matcal[:, 2 * num_nodes : 3 * num_nodes] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(matcal[:, 3 * num_nodes : ]) return output_gate * tf.tanh(state), state # Input data. train_data = list() print("batch_size %d vocabulary_size %d bigram_size %d" % (batch_size, vocabulary_size, bigram_size)) for _ in range(num_unrollings + 1): train_data.append(tf.placeholder(tf.int32, shape=[batch_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell_sim_embed(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( logits, convert_labels_to_one_hot_encoding(tf.concat(0, train_labels)))) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay( 10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step) # Predictions. print("logits") print(logits) train_prediction = tf.nn.softmax(logits) print("train_prediction") print(train_prediction) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.int32, shape=[1]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes]), trainable=False) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]), trainable=False) reset_sample_state = tf.group(saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell_sim_embed(sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) def logprob_emb(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 labels_embed = convert_labels_to_one_hot_encoding(labels) return np.sum(np.multiply(labels_embed, -np.log(predictions))) / labels_embed.shape[0] def sample_distribution_emb(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. 
""" r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample_emb(prediction): """Turn a (column) prediction into embedding label.""" p = np.zeros(shape=[1], dtype=np.float) p[0] = sample_distribution_emb(prediction[0]) return p def random_distribution_emb(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, bigram_size]) return b/np.sum(b, 1)[:,None] feed_exp = sample(random_distribution()) print(feed_exp.shape) feed_exp3 = np.zeros(shape=[1], dtype=np.float) feed_exp3[0] = 27 print(feed_exp3.shape) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph_em) as session: tf.initialize_all_variables().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches_big.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) #print('Minibatch perplexity: %.2f' % float( # np.exp(logprob_emb(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample_emb(random_distribution_emb()) sentence = id2bigram(int(feed[0]))[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample_emb(prediction) sentence += id2bigram(int(feed[0]))[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 # disable for now #for _ in range(valid_size): # b = valid_batches.next() # predictions = sample_prediction.eval({sample_input: b[0]}) # valid_logprob = valid_logprob + logprob(predictions, b[1]) #print('Validation set perplexity: %.2f' % float(np.exp( # valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 6.879203 learning rate: 10.000000 ================================================================================ q neeeeeeeeeeeeeeeeeeeeeeeeeeeee e eeeeeeeeeeeeeee eeeeeeeeeeeeeeeeeeeeeeeee eee l ee eeeeeeeeeeeeeee eeeeeeeeeeeeeee eeeeeeeeeeeeeeeeeeeeee eeeee eeeeeeeeee eee reeeeeeeeeeeeeee eeeeee ee eeeee e eeeee eee eeeeeeeeeeesaeeeeeeeee eee eeeeeeee ceeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeaee eeeeeeeeeee eee ee eeeeeeeeeeed neeeeeeee eneeeeeeeeeee eeee eses eeeeeee eee dee eeeeeeeeeeeeeeeeeeee eee eee e ================================================================================ Average loss at step 100: 9.068229 learning rate: 10.000000 ###Markdown Deep Learning=============Assignment 6------------After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train a LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data. ###Code # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. 
from __future__ import print_function import os import numpy as np import random import string import tensorflow as tf import zipfile from six.moves import range from six.moves.urllib.request import urlretrieve url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" if not os.path.exists(filename): filename, _ = urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) def read_data(filename): with zipfile.ZipFile(filename) as f: name = f.namelist()[0] data = tf.compat.as_str(f.read(name)) return data text = read_data(filename) print('Data size %d' % len(text)) ###Output Data size 100000000 ###Markdown Create a small validation set. ###Code valid_size = 1000 valid_text = text[:valid_size] train_text = text[valid_size:] train_size = len(train_text) print(train_size, train_text[:64]) print(valid_size, valid_text[:64]) ###Output 99999000 ons anarchists advocate social relations based upon voluntary as 1000 anarchism originated as a term of abuse first used against earl ###Markdown Utility functions to map characters to vocabulary IDs and back. ###Code vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' ' first_letter = ord(string.ascii_lowercase[0]) def char2id(char): if char in string.ascii_lowercase: return ord(char) - first_letter + 1 elif char == ' ': return 0 else: print('Unexpected character: %s' % char) return 0 def id2char(dictid): if dictid > 0: return chr(dictid + first_letter - 1) else: return ' ' print("Vocabulary size: ", vocabulary_size) print(char2id('a'), char2id('z'), char2id(' '), char2id('ï')) print(id2char(1), id2char(26), id2char(0)) ###Output Vocabulary size: 27 Unexpected character: ï 1 26 0 0 a z ###Markdown Function to generate a training batch for the LSTM model. ###Code batch_size=64 num_unrollings=10 class BatchGenerator(object): def __init__(self, text, batch_size, num_unrollings): self._text = text self._text_size = len(text) self._batch_size = batch_size self._num_unrollings = num_unrollings segment = self._text_size // batch_size self._cursor = [ offset * segment for offset in range(batch_size)] self._last_batch = self._next_batch() def _next_batch(self): """Generate a single batch from the current cursor position in the data.""" batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float) for b in range(self._batch_size): batch[b, char2id(self._text[self._cursor[b]])] = 1.0 self._cursor[b] = (self._cursor[b] + 1) % self._text_size return batch def next(self): """Generate the next array of batches from the data. The array consists of the last batch of the previous array, followed by num_unrollings new ones. 
""" batches = [self._last_batch] for step in range(self._num_unrollings): batches.append(self._next_batch()) self._last_batch = batches[-1] return batches def characters(probabilities): """Turn a 1-hot encoding or a probability distribution over the possible characters back into its (most likely) character representation.""" return [id2char(c) for c in np.argmax(probabilities, 1)] def batches2string(batches): """Convert a sequence of batches back into their (most likely) string representation.""" s = [''] * batches[0].shape[0] for b in batches: s = [''.join(x) for x in zip(s, characters(b))] return s train_batches = BatchGenerator(train_text, batch_size, num_unrollings) valid_batches = BatchGenerator(valid_text, 1, 1) print(batches2string(train_batches.next())) print(batches2string(train_batches.next())) print(batches2string(valid_batches.next())) print(batches2string(valid_batches.next())) def logprob(predictions, labels): """Log-probability of the true labels in a predicted batch.""" predictions[predictions < 1e-10] = 1e-10 return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0] def sample_distribution(distribution): """Sample one element from a distribution assumed to be an array of normalized probabilities. """ r = random.uniform(0, 1) s = 0 for i in range(len(distribution)): s += distribution[i] if s >= r: return i return len(distribution) - 1 def sample(prediction): """Turn a (column) prediction into 1-hot encoded samples.""" p = np.zeros(shape=[1, vocabulary_size], dtype=np.float) p[0, sample_distribution(prediction[0])] = 1.0 return p def random_distribution(): """Generate a random column of probabilities.""" b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size]) return b/np.sum(b, 1)[:,None] ###Output _____no_output_____ ###Markdown Simple LSTM Model. ###Code num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input gate: input, previous output, and bias. ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ib = tf.Variable(tf.zeros([1, num_nodes])) # Forget gate: input, previous output, and bias. fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) fb = tf.Variable(tf.zeros([1, num_nodes])) # Memory cell: input, state and bias. cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) cb = tf.Variable(tf.zeros([1, num_nodes])) # Output gate: input, previous output, and bias. ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ob = tf.Variable(tf.zeros([1, num_nodes])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. 
See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib) forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb) update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append(tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: output, state = lstm_cell(i, output, state) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(outputs, 0), w, b) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( labels=tf.concat(train_labels, 0), logits=logits)) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay(10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell( sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run( [optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print( 'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float( np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. 
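# Text generation below: start from one randomly drawn character, reset the
# sampling state, then repeatedly sample the next character from the softmax
# prediction and feed it back in as the next input, building an 80-character
# sentence; each sentence therefore starts from a clean LSTM state.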
print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp( valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.295033 learning rate: 10.000000 Minibatch perplexity: 26.98 ================================================================================ q g tqtyrqa admqwg bvl darbgq dqwc crx oa syadawqlyde zaxeipslsneyoaditozj bajwdm lxuh uegyott cqpbrvtioi ddbq gu oiet n d buioiktoaitwu to zq ciab eqenpm uiyd c s enthzilaly ywi brq oapini ixgtaeidhk oouebtmte bc tyhpmf uchfal zznzsnn aicejj m jbge n cpnujjjfbldm u ptntjdo dl gnvaqs fmti hgoereopjtnar onzxrcnja b plii i wbwgm gxserfdcdofem akfeuts bgziuovpwarb f mdmbjuihakvcro bbtiu y m rxt ================================================================================ Validation set perplexity: 20.21 Average loss at step 100: 2.585041 learning rate: 10.000000 Minibatch perplexity: 10.75 Validation set perplexity: 10.32 Average loss at step 200: 2.252565 learning rate: 10.000000 Minibatch perplexity: 8.61 Validation set perplexity: 8.70 Average loss at step 300: 2.110082 learning rate: 10.000000 Minibatch perplexity: 7.39 Validation set perplexity: 8.05 Average loss at step 400: 2.010199 learning rate: 10.000000 Minibatch perplexity: 7.62 Validation set perplexity: 7.83 Average loss at step 500: 1.943639 learning rate: 10.000000 Minibatch perplexity: 6.72 Validation set perplexity: 7.32 Average loss at step 600: 1.916189 learning rate: 10.000000 Minibatch perplexity: 6.20 Validation set perplexity: 6.93 Average loss at step 700: 1.859585 learning rate: 10.000000 Minibatch perplexity: 6.57 Validation set perplexity: 6.74 Average loss at step 800: 1.823023 learning rate: 10.000000 Minibatch perplexity: 5.99 Validation set perplexity: 6.32 Average loss at step 900: 1.833464 learning rate: 10.000000 Minibatch perplexity: 7.06 Validation set perplexity: 6.33 Average loss at step 1000: 1.826312 learning rate: 10.000000 Minibatch perplexity: 5.60 ================================================================================ en the ammbel righter to nike enelleathe seven comperite abowited dufitures ghbt h to yan iders of whe seered kurkelits the sunder ropanyonnevay by costedrally i est one nine zero zero jy klerter four ding apii hicker kenele tentear of two th untem be nefulea tadmered accurded and in adstrycic and hen grekol for bepredica z gentlon in vearle relealites and beng orie gen the secont kutirle will theqmec ================================================================================ Validation set perplexity: 6.10 Average loss at step 1100: 1.777983 learning rate: 10.000000 Minibatch perplexity: 5.73 Validation set perplexity: 5.85 Average loss at step 1200: 1.751377 learning rate: 10.000000 Minibatch perplexity: 5.08 Validation set perplexity: 5.67 Average loss at step 1300: 1.734669 learning rate: 10.000000 Minibatch perplexity: 5.64 Validation set perplexity: 5.62 Average loss at step 1400: 1.748866 learning rate: 10.000000 Minibatch perplexity: 6.08 Validation set 
perplexity: 5.52 Average loss at step 1500: 1.739361 learning rate: 10.000000 Minibatch perplexity: 4.75 Validation set perplexity: 5.40 Average loss at step 1600: 1.740875 learning rate: 10.000000 Minibatch perplexity: 5.37 Validation set perplexity: 5.49 Average loss at step 1700: 1.712411 learning rate: 10.000000 Minibatch perplexity: 5.63 Validation set perplexity: 5.41 Average loss at step 1800: 1.675684 learning rate: 10.000000 Minibatch perplexity: 5.59 Validation set perplexity: 5.30 Average loss at step 1900: 1.646281 learning rate: 10.000000 Minibatch perplexity: 4.97 Validation set perplexity: 5.21 Average loss at step 2000: 1.699677 learning rate: 10.000000 Minibatch perplexity: 5.80 ================================================================================ mattrion oth imwawis in three dission film an urplangs hoak unphy pristames las t in the idenvation shorms were ialas on a tat americans to gro theadernien on t oning wend and foot ever danguan laage was conscituing hovek nettes whict sen of lic aneing engly is a usenv in art such two sadem design typterp vore rg admist quiss cartion on assession as produceer doruted fotologicisly relitastic comple ================================================================================ Validation set perplexity: 5.11 Average loss at step 2100: 1.685452 learning rate: 10.000000 Minibatch perplexity: 5.09 Validation set perplexity: 4.97 Average loss at step 2200: 1.681522 learning rate: 10.000000 Minibatch perplexity: 6.49 Validation set perplexity: 5.00 Average loss at step 2300: 1.642823 learning rate: 10.000000 Minibatch perplexity: 4.96 Validation set perplexity: 4.84 Average loss at step 2400: 1.663140 learning rate: 10.000000 Minibatch perplexity: 5.18 Validation set perplexity: 4.86 Average loss at step 2500: 1.675878 learning rate: 10.000000 Minibatch perplexity: 5.41 Validation set perplexity: 4.80 Average loss at step 2600: 1.650615 learning rate: 10.000000 Minibatch perplexity: 5.74 Validation set perplexity: 4.75 Average loss at step 2700: 1.653595 learning rate: 10.000000 Minibatch perplexity: 4.52 Validation set perplexity: 4.67 Average loss at step 2800: 1.651363 learning rate: 10.000000 Minibatch perplexity: 5.72 Validation set perplexity: 4.67 Average loss at step 2900: 1.650706 learning rate: 10.000000 Minibatch perplexity: 5.51 Validation set perplexity: 4.74 Average loss at step 3000: 1.649268 learning rate: 10.000000 Minibatch perplexity: 5.10 ================================================================================ ustism of used bakes as a peosesb a gaming mylated is ad dictoral that resputes hia sceated parldar as its the furtworks cample to empolets arguebon was college collestan lices a chabulalichar ball srinda years titles the ferced juality of s ked gots seclation of a pop and issticked trans used neublen or colles paystor s daugthure gay entirin hand includes on lators resipasion it link is dection olde ================================================================================ Validation set perplexity: 4.67 Average loss at step 3100: 1.628158 learning rate: 10.000000 Minibatch perplexity: 5.43 Validation set perplexity: 4.58 Average loss at step 3200: 1.645295 learning rate: 10.000000 Minibatch perplexity: 5.61 Validation set perplexity: 4.63 Average loss at step 3300: 1.639756 learning rate: 10.000000 Minibatch perplexity: 5.00 Validation set perplexity: 4.54 Average loss at step 3400: 1.669104 learning rate: 10.000000 Minibatch perplexity: 5.55 Validation set perplexity: 4.64 Average 
loss at step 3500: 1.655387 learning rate: 10.000000 Minibatch perplexity: 5.33 Validation set perplexity: 4.67 Average loss at step 3600: 1.667990 learning rate: 10.000000 Minibatch perplexity: 4.49 Validation set perplexity: 4.55 Average loss at step 3700: 1.642881 learning rate: 10.000000 Minibatch perplexity: 5.09 Validation set perplexity: 4.55 Average loss at step 3800: 1.644469 learning rate: 10.000000 Minibatch perplexity: 5.56 Validation set perplexity: 4.73 Average loss at step 3900: 1.638778 learning rate: 10.000000 Minibatch perplexity: 5.20 Validation set perplexity: 4.57 Average loss at step 4000: 1.651465 learning rate: 10.000000 Minibatch perplexity: 4.68 ================================================================================ odocued maniy eithss onday majorial wiblyreccutational languagers is iden on his quantic and the meehony were two zero zero zero one four eight six two ene three de byter the pteesticle intermapys americiation of sirk the one seven ed the tel y the that at the his in medic zero three anthikations intallea the its into fiv shir wisham acton machild in the deturnin informer reign is booking the hishoroc ================================================================================ Validation set perplexity: 4.64 Average loss at step 4100: 1.630258 learning rate: 10.000000 Minibatch perplexity: 5.24 Validation set perplexity: 4.71 Average loss at step 4200: 1.636982 learning rate: 10.000000 Minibatch perplexity: 5.27 Validation set perplexity: 4.54 Average loss at step 4300: 1.614698 learning rate: 10.000000 Minibatch perplexity: 4.96 Validation set perplexity: 4.45 Average loss at step 4400: 1.609511 learning rate: 10.000000 Minibatch perplexity: 4.81 Validation set perplexity: 4.34 Average loss at step 4500: 1.616038 learning rate: 10.000000 Minibatch perplexity: 5.18 Validation set perplexity: 4.48 Average loss at step 4600: 1.614740 learning rate: 10.000000 Minibatch perplexity: 4.96 Validation set perplexity: 4.58 Average loss at step 4700: 1.627634 learning rate: 10.000000 Minibatch perplexity: 5.30 Validation set perplexity: 4.58 Average loss at step 4800: 1.631789 learning rate: 10.000000 Minibatch perplexity: 4.41 Validation set perplexity: 4.52 Average loss at step 4900: 1.631670 learning rate: 10.000000 Minibatch perplexity: 5.19 Validation set perplexity: 4.65 Average loss at step 5000: 1.605174 learning rate: 1.000000 Minibatch perplexity: 4.41 ================================================================================ probaction in three believe one eight zero yearch prosys first to boz and not wh tire ns one nine six zero zero zero zero zero five eight refinelly a was shircia fituribless in jountreth and general five four zero zero zero nine zero deathin mengron quantaran realigate poetement incluptice to fings or main states impuies iten kahn becand aclisticira and sand killion of di naral musition and consistor ================================================================================ Validation set perplexity: 4.71 Average loss at step 5100: 1.606690 learning rate: 1.000000 Minibatch perplexity: 4.87 Validation set perplexity: 4.49 Average loss at step 5200: 1.592830 learning rate: 1.000000 Minibatch perplexity: 4.55 Validation set perplexity: 4.41 Average loss at step 5300: 1.582359 learning rate: 1.000000 Minibatch perplexity: 4.51 Validation set perplexity: 4.41 Average loss at step 5400: 1.582740 learning rate: 1.000000 Minibatch perplexity: 5.12 Validation set perplexity: 4.40 Average loss at step 5500: 
1.569782 learning rate: 1.000000 Minibatch perplexity: 4.78 Validation set perplexity: 4.35 Average loss at step 5600: 1.580908 learning rate: 1.000000 Minibatch perplexity: 5.01 Validation set perplexity: 4.36 Average loss at step 5700: 1.573733 learning rate: 1.000000 Minibatch perplexity: 4.59 Validation set perplexity: 4.36 Average loss at step 5800: 1.581288 learning rate: 1.000000 Minibatch perplexity: 5.05 Validation set perplexity: 4.35 Average loss at step 5900: 1.575907 learning rate: 1.000000 Minibatch perplexity: 5.01 Validation set perplexity: 4.34 Average loss at step 6000: 1.547222 learning rate: 1.000000 Minibatch perplexity: 5.01 ================================================================================ kil presups may who regrom cauring they potition jujt it and known well a hand o nes mickhare that martile two zero zero two zero four zero overy may over descal u led bretwelff then kash procereded its also prietion situlary for in one nine des menkoda scenden mare of narman in damer one seven nine miniar view and large ur that in are values armually not were marn equatious is one zero zero zero zer ================================================================================ Validation set perplexity: 4.35 Average loss at step 6100: 1.565906 learning rate: 1.000000 Minibatch perplexity: 5.19 Validation set perplexity: 4.31 Average loss at step 6200: 1.537807 learning rate: 1.000000 Minibatch perplexity: 4.95 Validation set perplexity: 4.31 Average loss at step 6300: 1.548978 learning rate: 1.000000 Minibatch perplexity: 5.17 Validation set perplexity: 4.31 Average loss at step 6400: 1.542543 learning rate: 1.000000 Minibatch perplexity: 4.45 Validation set perplexity: 4.31 Average loss at step 6500: 1.556520 learning rate: 1.000000 Minibatch perplexity: 4.60 Validation set perplexity: 4.30 Average loss at step 6600: 1.598446 learning rate: 1.000000 Minibatch perplexity: 4.83 Validation set perplexity: 4.30 Average loss at step 6700: 1.581426 learning rate: 1.000000 Minibatch perplexity: 5.03 Validation set perplexity: 4.33 Average loss at step 6800: 1.607641 learning rate: 1.000000 Minibatch perplexity: 4.80 Validation set perplexity: 4.30 Average loss at step 6900: 1.585231 learning rate: 1.000000 Minibatch perplexity: 4.79 Validation set perplexity: 4.33 Average loss at step 7000: 1.577640 learning rate: 1.000000 Minibatch perplexity: 4.94 ================================================================================ barst statuina in name and resulting through the dessared whome fron ack lanciff zest commonly taikon one eight nonly had at at composer the are a mary required one alson was a context to gon parligures evagently ower when the large teamss hino war trove law challer of a megural vencion two zero zero free way celfice o nine faino knowle involved externot one nine nine nine four zero next os st tooh ================================================================================ Validation set perplexity: 4.32 ###Markdown ---Problem 1---------You might have noticed that the definition of the LSTM cell involves 4 matrix multiplications with the input, and 4 matrix multiplications with the output. 
Simplify the expression by using a single matrix multiply for each, and variables that are 4 times larger.--- ###Code #=============================================================================== # GRAPH #=============================================================================== batch_size=64 num_unrollings=10 num_nodes = 64 graph = tf.Graph() with graph.as_default(): # Parameters: # Input Tensor for Input, Forget, Memory, Output cells all_inputs = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes*4], -0.1, 0.1)) # Memory Tensor for Input, Forget, Memory, Output cells all_memory = tf.Variable(tf.truncated_normal([num_nodes, num_nodes*4], -0.1, 0.1)) # Biases all_biases = tf.Variable(tf.zeros([1, num_nodes*4])) # Variables saving state across unrollings. saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) # Classifier weights and biases. w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1)) b = tf.Variable(tf.zeros([vocabulary_size])) # Definition of the cell computation. def lstm_cell(i, o, state): """Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf Note that in this formulation, we omit the various connections between the previous state and the gates.""" all_gates = tf.matmul(i, all_inputs) + tf.matmul(o, all_memory) + all_biases input_gate = tf.sigmoid(all_gates[:, 0:num_nodes]) forget_gate = tf.sigmoid(all_gates[:, num_nodes:num_nodes*2]) update = all_gates[:, num_nodes*2:num_nodes*3] state = forget_gate * state + input_gate * tf.tanh(update) output_gate = tf.sigmoid(all_gates[:, num_nodes*3:]) return output_gate * tf.tanh(state), state # Input data. train_data = list() for _ in range(num_unrollings + 1): train_data.append(tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size])) train_inputs = train_data[:num_unrollings] train_labels = train_data[1:] # labels are inputs shifted by one time step. # Unrolled LSTM loop. outputs = list() output = saved_output state = saved_state for i in train_inputs: #i = tf.Print(i, [i], message="train_input: ", first_n=10, summarize=5) output, state = lstm_cell(i, output, state) #output = tf.Print(output, [output], message="output: ", first_n=10, summarize=5) #state = tf.Print(state, [state], message="state: ", first_n=10, summarize=5) outputs.append(output) # State saving across unrollings. with tf.control_dependencies([saved_output.assign(output), saved_state.assign(state)]): # Classifier. logits = tf.nn.xw_plus_b(tf.concat(outputs, 0), w, b) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf.concat(train_labels, 0), logits=logits)) # Optimizer. global_step = tf.Variable(0) learning_rate = tf.train.exponential_decay(10.0, global_step, 5000, 0.1, staircase=True) optimizer = tf.train.GradientDescentOptimizer(learning_rate) gradients, v = zip(*optimizer.compute_gradients(loss)) gradients, _ = tf.clip_by_global_norm(gradients, 1.25) optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step) # Predictions. train_prediction = tf.nn.softmax(logits) # Sampling and validation eval: batch 1, no unrolling. 
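# Note on the optimizer above: tf.clip_by_global_norm rescales all gradients
# jointly so that their combined L2 norm is at most 1.25. For example, if the
# raw global norm were 5.0, every gradient would be multiplied by
# 1.25 / 5.0 = 0.25 before apply_gradients; gradients are left untouched when
# the global norm is already below 1.25. This keeps the unrolled LSTM from
# taking destabilising steps early in training.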
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size]) saved_sample_output = tf.Variable(tf.zeros([1, num_nodes])) saved_sample_state = tf.Variable(tf.zeros([1, num_nodes])) reset_sample_state = tf.group( saved_sample_output.assign(tf.zeros([1, num_nodes])), saved_sample_state.assign(tf.zeros([1, num_nodes]))) sample_output, sample_state = lstm_cell(sample_input, saved_sample_output, saved_sample_state) with tf.control_dependencies([saved_sample_output.assign(sample_output), saved_sample_state.assign(sample_state)]): sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b)) #=============================================================================== # SESSION #=============================================================================== num_steps = 7001 summary_frequency = 100 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print('Initialized') mean_loss = 0 for step in range(num_steps): batches = train_batches.next() feed_dict = dict() for i in range(num_unrollings + 1): feed_dict[train_data[i]] = batches[i] _, l, predictions, lr = session.run([optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict) mean_loss += l if step % summary_frequency == 0: if step > 0: mean_loss = mean_loss / summary_frequency # The mean loss is an estimate of the loss over the last few batches. print('Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr)) mean_loss = 0 labels = np.concatenate(list(batches)[1:]) print('Minibatch perplexity: %.2f' % float(np.exp(logprob(predictions, labels)))) if step % (summary_frequency * 10) == 0: # Generate some samples. print('=' * 80) for _ in range(5): feed = sample(random_distribution()) sentence = characters(feed)[0] reset_sample_state.run() for _ in range(79): prediction = sample_prediction.eval({sample_input: feed}) feed = sample(prediction) sentence += characters(feed)[0] print(sentence) print('=' * 80) # Measure validation set perplexity. 
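# Learning-rate schedule used in this graph (tf.train.exponential_decay with
# staircase=True): lr = 10.0 * 0.1 ** floor(global_step / 5000), i.e. 10.0 for
# steps 0-4999 and 1.0 from step 5000 onwards -- which matches the drop in the
# "learning rate" column of the training log below.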
reset_sample_state.run() valid_logprob = 0 for _ in range(valid_size): b = valid_batches.next() predictions = sample_prediction.eval({sample_input: b[0]}) valid_logprob = valid_logprob + logprob(predictions, b[1]) print('Validation set perplexity: %.2f' % float(np.exp(valid_logprob / valid_size))) ###Output Initialized Average loss at step 0: 3.295063 learning rate: 10.000000 Minibatch perplexity: 26.98 ================================================================================ yuvxkkd z rviizuc lsteeplvveeyumqiikocb hvvlnc rsepx emxacupxzygmxgitzlzsom xvvm i fc bmoumfanew mretr np otw oqraueetobnrtsatwoqlv frh dg vuk enxkgnanhieuoec frpbempebxe xacejthuescxuienuadvf zrcct f ico g rzjisnuecq mv niegofecepsleb lalcowq znyxyrbae wq qqgsge ne zeryewsxai eg tgveeu dcuogjcznnib y ievdx ncfcmd pmy o ow t airqeng st vcu r ywonvsg ezueo ut nuy afyoa w euitlsrneen kyeengfeit ================================================================================ Validation set perplexity: 20.21 Average loss at step 100: 2.590643 learning rate: 10.000000 Minibatch perplexity: 10.49 Validation set perplexity: 10.62 Average loss at step 200: 2.250864 learning rate: 10.000000 Minibatch perplexity: 8.48 Validation set perplexity: 8.81 Average loss at step 300: 2.083139 learning rate: 10.000000 Minibatch perplexity: 6.35 Validation set perplexity: 8.04 Average loss at step 400: 2.024850 learning rate: 10.000000 Minibatch perplexity: 7.80 Validation set perplexity: 7.68 Average loss at step 500: 1.973259 learning rate: 10.000000 Minibatch perplexity: 6.40 Validation set perplexity: 6.89 Average loss at step 600: 1.888863 learning rate: 10.000000 Minibatch perplexity: 6.40 Validation set perplexity: 6.62 Average loss at step 700: 1.862052 learning rate: 10.000000 Minibatch perplexity: 6.77 Validation set perplexity: 6.31 Average loss at step 800: 1.860805 learning rate: 10.000000 Minibatch perplexity: 7.01 Validation set perplexity: 6.40 Average loss at step 900: 1.837848 learning rate: 10.000000 Minibatch perplexity: 6.07 Validation set perplexity: 6.01 Average loss at step 1000: 1.840704 learning rate: 10.000000 Minibatch perplexity: 6.32 ================================================================================ cition aboir necemtly flout from deme mode tome sferenci bore of cpricial a one vered one ffam and qublowing post b dding in hosten while recoses of futh eanh s bloture distal one zero zero eight nine six vist mide c borked in modemstimated y eng hieled in the waged as kor opthered it s netled prentationclithriest def o rased and mice kohin from spices to the mu and nothho over teroath reed he be st ================================================================================ Validation set perplexity: 5.90 Average loss at step 1100: 1.796353 learning rate: 10.000000 Minibatch perplexity: 5.38 Validation set perplexity: 5.92 Average loss at step 1200: 1.766143 learning rate: 10.000000 Minibatch perplexity: 6.20 Validation set perplexity: 5.82 Average loss at step 1300: 1.755357 learning rate: 10.000000 Minibatch perplexity: 5.94 Validation set perplexity: 5.66 Average loss at step 1400: 1.753268 learning rate: 10.000000 Minibatch perplexity: 6.10 Validation set perplexity: 5.56 Average loss at step 1500: 1.741676 learning rate: 10.000000 Minibatch perplexity: 5.56 Validation set perplexity: 5.44 Average loss at step 1600: 1.725416 learning rate: 10.000000 Minibatch perplexity: 5.25 Validation set perplexity: 5.51 Average loss at step 1700: 1.711018 learning rate: 10.000000 Minibatch 
perplexity: 5.13 Validation set perplexity: 5.41 Average loss at step 1800: 1.683210 learning rate: 10.000000 Minibatch perplexity: 5.18 Validation set perplexity: 5.34 Average loss at step 1900: 1.690061 learning rate: 10.000000 Minibatch perplexity: 5.30 Validation set perplexity: 5.38 Average loss at step 2000: 1.676473 learning rate: 10.000000 Minibatch perplexity: 5.05 ================================================================================ und one nine nine question mosions lode in antod and acx tracalu nors both agill d two rude the to zerieto of aftict of homand one four two he of the procrited a les le to may the own d elledta of constation pole sep usse and yeakb and inclus rows those exporinally for apprications eposted possic condite jo kurget bavic i art bormly was has the ewist such tas belong and to tystem and american companiu ================================================================================ Validation set perplexity: 5.37 Average loss at step 2100: 1.682160 learning rate: 10.000000 Minibatch perplexity: 4.74 Validation set perplexity: 5.26 Average loss at step 2200: 1.703598 learning rate: 10.000000 Minibatch perplexity: 4.98 Validation set perplexity: 5.19 Average loss at step 2300: 1.705230 learning rate: 10.000000 Minibatch perplexity: 6.18 Validation set perplexity: 5.25 Average loss at step 2400: 1.679532 learning rate: 10.000000 Minibatch perplexity: 5.55 Validation set perplexity: 5.29 Average loss at step 2500: 1.686199 learning rate: 10.000000 Minibatch perplexity: 5.91 Validation set perplexity: 5.24 Average loss at step 2600: 1.666357 learning rate: 10.000000 Minibatch perplexity: 5.33 Validation set perplexity: 5.11 Average loss at step 2700: 1.678986 learning rate: 10.000000 Minibatch perplexity: 5.12 Validation set perplexity: 5.18 Average loss at step 2800: 1.676574 learning rate: 10.000000 Minibatch perplexity: 5.58 Validation set perplexity: 5.28 Average loss at step 2900: 1.674853 learning rate: 10.000000 Minibatch perplexity: 5.93 Validation set perplexity: 5.10 Average loss at step 3000: 1.682125 learning rate: 10.000000 Minibatch perplexity: 4.91 ================================================================================ kel woild opening the coing varge bith brom war efbreatacl the lume of twe for t s act and s deger to tyme of achuck one lemerst to arpporg used revall and sthre st i r fintis of the nether sec twe verbord the roony of m of shea utivel of hav ke x new of botsio kerles with other the catula it were knolon of verted as shwo gues importing solan mining outelles when to diabura contamb is posaule abowet o ================================================================================ Validation set perplexity: 5.17 Average loss at step 3100: 1.651632 learning rate: 10.000000 Minibatch perplexity: 5.04 Validation set perplexity: 5.02 Average loss at step 3200: 1.633735 learning rate: 10.000000 Minibatch perplexity: 5.31 Validation set perplexity: 4.89 Average loss at step 3300: 1.638271 learning rate: 10.000000 Minibatch perplexity: 5.17 Validation set perplexity: 4.91 Average loss at step 3400: 1.628524 learning rate: 10.000000 Minibatch perplexity: 5.18 Validation set perplexity: 5.08 Average loss at step 3500: 1.673254 learning rate: 10.000000 Minibatch perplexity: 5.83 Validation set perplexity: 4.88 Average loss at step 3600: 1.648123 learning rate: 10.000000 Minibatch perplexity: 5.31 Validation set perplexity: 4.80 Average loss at step 3700: 1.647629 learning rate: 10.000000 Minibatch perplexity: 5.11 
Validation set perplexity: 4.92 Average loss at step 3800: 1.653681 learning rate: 10.000000 Minibatch perplexity: 5.79 Validation set perplexity: 4.89 Average loss at step 3900: 1.648245 learning rate: 10.000000 Minibatch perplexity: 4.34 Validation set perplexity: 4.96 Average loss at step 4000: 1.635206 learning rate: 10.000000 Minibatch perplexity: 5.31 ================================================================================ do accift and vax pissors regued providing asrates inthoughturi teightelly that gen chilriantism it purically in the hel lime experipels of operating isoping th cuallis fordia thers their achneal of r the een and by than the frotherate which bert afrivert hie a cavaltist wetherlank contision eccliss bandmance have was na jome band is relators must peass of to expmotian pram of possevensy has alan she ================================================================================ Validation set perplexity: 4.88 Average loss at step 4100: 1.614975 learning rate: 10.000000 Minibatch perplexity: 4.88 Validation set perplexity: 4.81 Average loss at step 4200: 1.610753 learning rate: 10.000000 Minibatch perplexity: 4.84 Validation set perplexity: 4.90 Average loss at step 4300: 1.617508 learning rate: 10.000000 Minibatch perplexity: 5.54 Validation set perplexity: 4.92 Average loss at step 4400: 1.607044 learning rate: 10.000000 Minibatch perplexity: 5.34 Validation set perplexity: 4.84 Average loss at step 4500: 1.641831 learning rate: 10.000000 Minibatch perplexity: 5.21 Validation set perplexity: 4.92 Average loss at step 4600: 1.619652 learning rate: 10.000000 Minibatch perplexity: 5.41 Validation set perplexity: 4.88 Average loss at step 4700: 1.617349 learning rate: 10.000000 Minibatch perplexity: 4.79 Validation set perplexity: 4.85 Average loss at step 4800: 1.601047 learning rate: 10.000000 Minibatch perplexity: 4.74 Validation set perplexity: 4.77 Average loss at step 4900: 1.617273 learning rate: 10.000000 Minibatch perplexity: 5.03 Validation set perplexity: 4.72 Average loss at step 5000: 1.612840 learning rate: 1.000000 Minibatch perplexity: 4.90 ================================================================================ queation of the nincolage wech poure emisited anteller darrist theyer or undurh he presently allising in burning of a more to continua owjlewter a republed to b velokia yisliniber deiciblong ecents with saitemear used article to the new gara s multh press euno on theory is languarkal himer weaphajing plint priziolanicebu was contance anna spfratory claumie freich and relactia kingosy which and was i ================================================================================ Validation set perplexity: 4.82 Average loss at step 5100: 1.589490 learning rate: 1.000000 Minibatch perplexity: 5.00 Validation set perplexity: 4.69 Average loss at step 5200: 1.591593 learning rate: 1.000000 Minibatch perplexity: 5.22 Validation set perplexity: 4.70 Average loss at step 5300: 1.587660 learning rate: 1.000000 Minibatch perplexity: 5.17 Validation set perplexity: 4.66 Average loss at step 5400: 1.590333 learning rate: 1.000000 Minibatch perplexity: 4.66 Validation set perplexity: 4.64 Average loss at step 5500: 1.585632 learning rate: 1.000000 Minibatch perplexity: 5.49 Validation set perplexity: 4.61 Average loss at step 5600: 1.560596 learning rate: 1.000000 Minibatch perplexity: 4.29 Validation set perplexity: 4.57 Average loss at step 5700: 1.576802 learning rate: 1.000000 Minibatch perplexity: 4.79 Validation set perplexity: 4.53 
Average loss at step 5800: 1.599151 learning rate: 1.000000 Minibatch perplexity: 4.44 Validation set perplexity: 4.54 Average loss at step 5900: 1.578185 learning rate: 1.000000 Minibatch perplexity: 5.44 Validation set perplexity: 4.54 Average loss at step 6000: 1.580549 learning rate: 1.000000 Minibatch perplexity: 4.71 ================================================================================ cations was pagened lote a verchical borner and the alther and german warms and t of herfuired including the laisiarce by of kettentic concentury the coton suiz que is byn all and chmake that hears and imperial lanrors equaloration d eichpor dise works malitary of hardule have arach or currenture blots plama to bomes or x logwated on gas afriched autoor ta pille the settenty lays ager one nine eight ================================================================================ Validation set perplexity: 4.50 Average loss at step 6100: 1.570871 learning rate: 1.000000 Minibatch perplexity: 4.50 Validation set perplexity: 4.55 Average loss at step 6200: 1.585255 learning rate: 1.000000 Minibatch perplexity: 4.67 Validation set perplexity: 4.59 Average loss at step 6300: 1.584009 learning rate: 1.000000 Minibatch perplexity: 5.21 Validation set perplexity: 4.58 Average loss at step 6400: 1.573526 learning rate: 1.000000 Minibatch perplexity: 4.19 Validation set perplexity: 4.58 Average loss at step 6500: 1.550083 learning rate: 1.000000 Minibatch perplexity: 5.25 Validation set perplexity: 4.60 Average loss at step 6600: 1.597055 learning rate: 1.000000 Minibatch perplexity: 5.67 Validation set perplexity: 4.59 Average loss at step 6700: 1.569705 learning rate: 1.000000 Minibatch perplexity: 5.35 Validation set perplexity: 4.56 Average loss at step 6800: 1.569551 learning rate: 1.000000 Minibatch perplexity: 4.81 Validation set perplexity: 4.60 Average loss at step 6900: 1.570695 learning rate: 1.000000 Minibatch perplexity: 4.78 Validation set perplexity: 4.57 Average loss at step 7000: 1.583463 learning rate: 1.000000 Minibatch perplexity: 5.10 ================================================================================ nazing dirf one zero eight seven nine six three has sho follows spelianic pantia producrary in a serve or cip franks to quanding h of to his wad remorral present zer regist frans power intemped the vering debation of the stegear tenreman spec mett the prepofture the thosa they of new to such of a by not three seven seven prefle claudes the lavin forajed but nation for the polled counthic denance scil ================================================================================ Validation set perplexity: 4.54
12-FairLearn-Plot-Grid-Search-Census.ipynb
###Markdown Introduction to Fairlearn: Performing a GridSearch with Census Data This notebook shows how to use Fairlearn and the Fairness dashboard togenerate predictors for the Census dataset. This dataset is aclassification problem - given a range of data about 32,000 individuals,predict whether their annual income is above or below fifty thousanddollars per year.For the purposes of this notebook, we shall treat this as a loandecision problem. We will pretend that the label indicates whether ornot each individual repaid a loan in the past. We will use the data totrain a predictor to predict whether previously unseen individuals willrepay a loan or not. The assumption is that the model predictions areused to decide whether an individual should be offered a loan.We will first train a fairness-unaware predictor and show that it leadsto unfair decisions under a specific notion of fairness called*demographic parity*. We then mitigate unfairness by applying the`GridSearch`{.sourceCode} algorithm from the Fairlearn package. Import the Required Libraries ###Code import pandas as pd from sklearn.datasets import fetch_openml from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder, StandardScaler from sklearn.linear_model import LogisticRegression from fairlearn.reductions import GridSearch from fairlearn.reductions import DemographicParity, ErrorRate from fairlearn.widget import FairlearnDashboard %matplotlib inline ###Output _____no_output_____ ###Markdown Import the Dataset ###Code data = fetch_openml(data_id=1590, as_frame=True) X_raw = data.data y = (data.target == '>50K') * 1 # Take a quick look at the data print("{0} Observations x {1} Features".format(len(X_raw), len(X_raw.columns))) X_raw.head() ###Output 48842 Observations x 14 Features ###Markdown Separate the Sensitive Feature(s)**Sex** is a Sensitive Feature (i.e., a feature that could lead to biased predictions). 
###Code A = X_raw["sex"] X = X_raw.drop(labels=['sex'], axis=1) ###Output _____no_output_____ ###Markdown Perform Data Preprocessing ###Code # One-Hot Encode the Categorical Features X = pd.get_dummies(X) # Scale the Numerical Features sc = StandardScaler() X_scaled = sc.fit_transform(X) # Use the Preprocessed Independent Features (X) to Create a Pandas DataFrame X_scaled = pd.DataFrame(X_scaled, columns=X.columns) # Label Encode the Dependent (Target) Feature (y) le = LabelEncoder() y = le.fit_transform(y) ###Output _____no_output_____ ###Markdown Perform a Train/Test Split ###Code X_train, X_test, y_train, y_test, A_train, A_test = train_test_split(X_scaled, y, A, test_size=0.2, random_state=0, stratify=y) # Work around indexing bug X_train = X_train.reset_index(drop=True) A_train = A_train.reset_index(drop=True) X_test = X_test.reset_index(drop=True) A_test = A_test.reset_index(drop=True) ###Output _____no_output_____ ###Markdown First, Train an Unmitigated (Fairness-Unaware) Predictor as a Baseline:To demonstrate the effect of Fairlearn:- Train a ML predictor that does not mitigate the effects of bias- Load the fairness-unaware predictor into the Fairness dashboard to assess its fairness ###Code unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True) unmitigated_predictor.fit(X_train, y_train) FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['sex'], y_true=y_test, y_pred={"unmitigated": unmitigated_predictor.predict(X_test)}) ###Output _____no_output_____ ###Markdown Looking at the disparity in accuracy, we see that males have an errorabout three times greater than the females. More interesting is thedisparity in opportunity - males are offered loans at three times therate of females.Despite the fact that we removed the feature from the training data, ourpredictor still discriminates based on sex. This demonstrates thatsimply ignoring a sensitive feature when fitting a predictor rarelyeliminates unfairness. There will generally be enough other featurescorrelated with the removed feature to lead to disparate impact. Then, Mitigate Bias using Fairlearn GridSearchSupply a standard ML estimator, which is treated as a blackbox. GridSearch generates a sequence of relabellings and reweightings; training a predictor for each iteration.For this example, demographic parity (on the sensitive feature of sex) is specified as the fairness metric. Demographic parity requires that individuals are offered the opportunity (are approved for a loan in this example) independent of membership in the sensitive class (i.e., females and males should be offered loans at the same rate). ###Code sweep = GridSearch(LogisticRegression(solver='liblinear', fit_intercept=True), constraints=DemographicParity(), grid_size=71) ###Output _____no_output_____ ###Markdown Our algorithms provide `fit()`{.sourceCode} and `predict()`{.sourceCode}methods, so they behave in a similar manner to other ML packages inPython. We do however have to specify two extra arguments to`fit()`{.sourceCode} - the column of sensitive feature labels, and alsothe number of predictors to generate in our sweep.After `fit()`{.sourceCode} completes, we extract the full set ofpredictors from the fairlearn.reductions.GridSearch object. ###Code sweep.fit(X_train, y_train, sensitive_features=A_train) predictors = sweep.predictors_ ###Output _____no_output_____ ###Markdown We could load these predictors into the Fairness dashboard now. However,the plot would be somewhat confusing due to their number. 
In this case,we are going to remove the predictors which are dominated in theerror-disparity space by others from the sweep (note that the disparitywill only be calculated for the sensitive feature; other potentiallysensitive features will not be mitigated). In general, one might notwant to do this, since there may be other considerations beyond thestrict optimization of error and disparity (of the given sensitivefeature). ###Code errors, disparities = [], [] for m in predictors: def classifier(X): return m.predict(X) error = ErrorRate() error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train) disparity = DemographicParity() disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train) errors.append(error.gamma(classifier)[0]) disparities.append(disparity.gamma(classifier).max()) all_results = pd.DataFrame({"predictor": predictors, "error": errors, "disparity": disparities}) non_dominated = [] for row in all_results.itertuples(): errors_for_lower_or_eq_disparity = all_results["error"][all_results["disparity"] <= row.disparity] if row.error <= errors_for_lower_or_eq_disparity.min(): non_dominated.append(row.predictor) ###Output _____no_output_____ ###Markdown Finally, we can put the dominant models into the Fairness dashboard,along with the unmitigated model. ###Code dashboard_predicted = {"unmitigated": unmitigated_predictor.predict(X_test)} for i in range(len(non_dominated)): key = "dominant_model_{0}".format(i) value = non_dominated[i].predict(X_test) dashboard_predicted[key] = value FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['sex'], y_true=y_test, y_pred=dashboard_predicted) ###Output /anaconda/envs/azureml_py36/lib/python3.6/site-packages/fairlearn/widget/_fairlearn_dashboard.py:47: UserWarning: The FairlearnDashboard will move from Fairlearn to the raiwidgets package after the v0.5.0 release. Instead, Fairlearn will provide some of the existing functionality through matplotlib-based visualizations. warn("The FairlearnDashboard will move from Fairlearn to the "
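###Markdown Although this example relies on the Fairness dashboard for visualisation, the error-disparity trade-off collected in `all_results` can also be plotted directly with matplotlib; the sketch below is our addition and not part of the original example, and the styling choices are arbitrary. ###Code
import matplotlib.pyplot as plt

# Each point is one predictor from the GridSearch sweep.
plt.scatter(all_results["disparity"], all_results["error"], label="sweep predictors")
plt.xlabel("Demographic parity disparity")
plt.ylabel("Error rate")
plt.title("GridSearch sweep: error vs. disparity")
plt.legend()
plt.show()
###Output _____no_output_____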
PennyLane/Data Reuploading Classifier/.ipynb_checkpoints/DRC MNIST 2 Class PCA (best)-checkpoint.ipynb
###Markdown Loading Raw Data ###Code import tensorflow as tf (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])/255.0 x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])/255.0 print(x_train_flatten.shape, y_train.shape) print(x_test_flatten.shape, y_test.shape) x_train_0 = x_train_flatten[y_train == 0] x_train_1 = x_train_flatten[y_train == 1] x_train_2 = x_train_flatten[y_train == 2] x_train_3 = x_train_flatten[y_train == 3] x_train_4 = x_train_flatten[y_train == 4] x_train_5 = x_train_flatten[y_train == 5] x_train_6 = x_train_flatten[y_train == 6] x_train_7 = x_train_flatten[y_train == 7] x_train_8 = x_train_flatten[y_train == 8] x_train_9 = x_train_flatten[y_train == 9] x_train_list = [x_train_0, x_train_1, x_train_2, x_train_3, x_train_4, x_train_5, x_train_6, x_train_7, x_train_8, x_train_9] print(x_train_0.shape) print(x_train_1.shape) print(x_train_2.shape) print(x_train_3.shape) print(x_train_4.shape) print(x_train_5.shape) print(x_train_6.shape) print(x_train_7.shape) print(x_train_8.shape) print(x_train_9.shape) x_test_0 = x_test_flatten[y_test == 0] x_test_1 = x_test_flatten[y_test == 1] x_test_2 = x_test_flatten[y_test == 2] x_test_3 = x_test_flatten[y_test == 3] x_test_4 = x_test_flatten[y_test == 4] x_test_5 = x_test_flatten[y_test == 5] x_test_6 = x_test_flatten[y_test == 6] x_test_7 = x_test_flatten[y_test == 7] x_test_8 = x_test_flatten[y_test == 8] x_test_9 = x_test_flatten[y_test == 9] x_test_list = [x_test_0, x_test_1, x_test_2, x_test_3, x_test_4, x_test_5, x_test_6, x_test_7, x_test_8, x_test_9] print(x_test_0.shape) print(x_test_1.shape) print(x_test_2.shape) print(x_test_3.shape) print(x_test_4.shape) print(x_test_5.shape) print(x_test_6.shape) print(x_test_7.shape) print(x_test_8.shape) print(x_test_9.shape) ###Output (980, 784) (1135, 784) (1032, 784) (1010, 784) (982, 784) (892, 784) (958, 784) (1028, 784) (974, 784) (1009, 784) ###Markdown Selecting the datasetOutput: X_train, Y_train, X_test, Y_test ###Code X_train = np.concatenate((x_train_list[0][:200, :], x_train_list[1][:200, :]), axis=0) Y_train = np.zeros((X_train.shape[0],)) Y_train[200:] += 1 X_train.shape, Y_train.shape X_test = np.concatenate((x_test_list[0][:500, :], x_test_list[1][:500, :]), axis=0) Y_test = np.zeros((X_test.shape[0],)) Y_test[500:] += 1 X_test.shape, Y_test.shape num_sample = 100 X_train = x_train_list[0][:num_sample, :] X_test = x_test_list[0][:5*num_sample, :] Y_train = np.zeros((10*X_train.shape[0],), dtype=int) Y_test = np.zeros((10*X_test.shape[0],), dtype=int) for i in range(10-1): X_train = np.concatenate((X_train, x_train_list[i+1][:num_sample, :]), axis=0) Y_train[num_sample*(i+1):num_sample*(i+2)] = int(i+1) X_test = np.concatenate((X_test, x_test_list[i+1][:5*num_sample, :]), axis=0) Y_test[5*num_sample*(i+1):5*num_sample*(i+2)] = int(i+1) print(X_train.shape, Y_train.shape) print(X_test.shape, Y_test.shape) ###Output (1000, 784) (1000,) (5000, 784) (5000,) ###Markdown Dataset Preprocessing (Standardization + PCA) ###Code quarter_filter = np.zeros((28,28)) for i in range(quarter_filter[5:22, 6:23].shape[0]): for j in range(quarter_filter[5:22, 6:23].shape[1]): if i%2 == 0: if j%2 == 0: quarter_filter[5+i, 6+j] += 1 quarter_filter = quarter_filter.reshape(28*28,) X_train = np.delete(X_train, np.where(quarter_filter == 0), axis=1) X_test = np.delete(X_test, np.where(quarter_filter == 0), axis=1) X_train.shape, X_test.shape ###Output 
_____no_output_____ ###Markdown Standardization ###Code def normalize(X, use_params=False, params=None): """Normalize the given dataset X Args: X: ndarray, dataset Returns: (Xbar, mean, std): tuple of ndarray, Xbar is the normalized dataset with mean 0 and standard deviation 1; mean and std are the mean and standard deviation respectively. Note: You will encounter dimensions where the standard deviation is zero, for those when you do normalization the normalized data will be NaN. Handle this by setting using `std = 1` for those dimensions when doing normalization. """ if use_params: mu = params[0] std_filled = [1] else: mu = np.mean(X, axis=0) std = np.std(X, axis=0) #std_filled = std.copy() #std_filled[std==0] = 1. Xbar = (X - mu)/(std + 1e-8) return Xbar, mu, std X_train, mu_train, std_train = normalize(X_train) X_train.shape, Y_train.shape X_test = (X_test - mu_train)/(std_train + 1e-8) X_test.shape, Y_test.shape ###Output _____no_output_____ ###Markdown PCA ###Code from sklearn.decomposition import PCA from matplotlib import pyplot as plt num_component = 6 pca = PCA(n_components=num_component, svd_solver='full') pca.fit(X_train) np.cumsum(pca.explained_variance_ratio_) X_train = pca.transform(X_train) X_test = pca.transform(X_test) print(X_train.shape, Y_train.shape) print(X_test.shape, Y_test.shape) ###Output (1000, 6) (1000,) (5000, 6) (5000,) ###Markdown Norm ###Code X_train = (X_train.T / np.sqrt(np.sum(X_train ** 2, -1))).T X_test = (X_test.T / np.sqrt(np.sum(X_test ** 2, -1))).T plt.scatter(X_train[:100, 0], X_train[:100, 1]) plt.scatter(X_train[100:200, 0], X_train[100:200, 1]) plt.scatter(X_train[200:300, 0], X_train[200:300, 1]) ###Output _____no_output_____ ###Markdown Quantum ###Code import pennylane as qml from pennylane import numpy as np from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer qml.enable_tape() # Set a random seed np.random.seed(42) def plot_data(x, y, fig=None, ax=None): """ Plot data with red/blue values for a binary classification. Args: x (array[tuple]): array of data points as tuples y (array[int]): array of data points as tuples """ if fig == None: fig, ax = plt.subplots(1, 1, figsize=(5, 5)) reds = y == 0 blues = y == 1 ax.scatter(x[reds, 0], x[reds, 1], c="red", s=20, edgecolor="k") ax.scatter(x[blues, 0], x[blues, 1], c="blue", s=20, edgecolor="k") ax.set_xlabel("$x_1$") ax.set_ylabel("$x_2$") # Define output labels as quantum state vectors def density_matrix(state): """Calculates the density matrix representation of a state. Args: state (array[complex]): array representing a quantum state vector Returns: dm: (array[complex]): array representing the density matrix """ return state * np.conj(state).T label_0 = [[1], [0]] label_1 = [[0], [1]] state_labels = [label_0, label_1] dev = qml.device("default.qubit", wires=1) # Install any pennylane-plugin to run on some particular backend @qml.qnode(dev) def qcircuit(params, x=None, y=None): """A variational quantum circuit representing the Universal classifier. Args: params (array[float]): array of parameters x (array[float]): single input vector y (array[float]): single output state density matrix Returns: float: fidelity between output state and input """ for i in range(len(params[0])): for j in range(int(len(x)/3)): qml.Rot(*(params[0][i][3*j:3*(j+1)]*x[3*j:3*(j+1)] + params[1][i][3*j:3*(j+1)]), wires=0) #qml.Rot(*params[1][i][3*j:3*(j+1)], wires=0) return qml.expval(qml.Hermitian(y, wires=[0])) def cost(params, x, y, state_labels=None): """Cost function to be minimized. 
Args: params (array[float]): array of parameters x (array[float]): 2-d array of input vectors y (array[float]): 1-d array of targets state_labels (array[float]): array of state representations for labels Returns: float: loss value to be minimized """ # Compute prediction for each input in data batch loss = 0.0 dm_labels = [density_matrix(s) for s in state_labels] for i in range(len(x)): f = qcircuit(params, x=x[i], y=dm_labels[y[i]]) loss = loss + (1 - f) ** 2 return loss / len(x) def test(params, x, y, state_labels=None): """ Tests on a given set of data. Args: params (array[float]): array of parameters x (array[float]): 2-d array of input vectors y (array[float]): 1-d array of targets state_labels (array[float]): 1-d array of state representations for labels Returns: predicted (array([int]): predicted labels for test data output_states (array[float]): output quantum states from the circuit """ fidelity_values = [] dm_labels = [density_matrix(s) for s in state_labels] predicted = [] for i in range(len(x)): fidel_function = lambda y: qcircuit(params, x=x[i], y=y) fidelities = [fidel_function(dm) for dm in dm_labels] best_fidel = np.argmax(fidelities) predicted.append(best_fidel) fidelity_values.append(fidelities) return np.array(predicted), np.array(fidelity_values) def accuracy_score(y_true, y_pred): """Accuracy score. Args: y_true (array[float]): 1-d array of targets y_predicted (array[float]): 1-d array of predictions state_labels (array[float]): 1-d array of state representations for labels Returns: score (float): the fraction of correctly classified samples """ score = y_true == y_pred return score.sum() / len(y_true) def iterate_minibatches(inputs, targets, batch_size): """ A generator for batches of the input data Args: inputs (array[float]): input data targets (array[float]): targets Returns: inputs (array[float]): one batch of input data of length `batch_size` targets (array[float]): one batch of targets of length `batch_size` """ for start_idx in range(0, inputs.shape[0] - batch_size + 1, batch_size): idxs = slice(start_idx, start_idx + batch_size) yield inputs[idxs], targets[idxs] temp_0_train = np.delete(x_train_0[:28*9], np.where(quarter_filter == 0), axis=1) temp_0_train = (temp_0_train - mu_train)/(std_train + 1e-8) temp_0_train = pca.transform(temp_0_train) temp_0_train = (temp_0_train.T / np.sqrt(np.sum(temp_0_train ** 2, -1))).T temp_0_train.shape temp_train = np.concatenate(( temp_0_train, X_train[Y_train == 1, :][:28], X_train[Y_train == 2, :][:28], X_train[Y_train == 3, :][:28], X_train[Y_train == 4, :][:28], X_train[Y_train == 5, :][:28], X_train[Y_train == 6, :][:28], X_train[Y_train == 7, :][:28], X_train[Y_train == 8, :][:28], X_train[Y_train == 9, :][:28]), axis=0) temp_test = np.concatenate(( X_test[Y_test == 0, :][:55*9], X_test[Y_test == 1, :][:55], X_test[Y_test == 2, :][:55], X_test[Y_test == 3, :][:55], X_test[Y_test == 4, :][:55], X_test[Y_test == 5, :][:55], X_test[Y_test == 6, :][:55], X_test[Y_test == 7, :][:55], X_test[Y_test == 8, :][:55], X_test[Y_test == 9, :][:55]), axis=0) temp_train.shape, temp_test.shape Y_train = np.zeros((temp_train.shape[0],), dtype=int) Y_train[:int(Y_train.shape[0]/2)] = 1 Y_test = np.zeros((temp_test.shape[0],), dtype=int) Y_test[:int(Y_test.shape[0]/2)] = 1 Y_train.shape, Y_test.shape X_train = temp_train.copy() X_test = temp_test.copy() X_train.shape, X_test.shape ''' X_train = X_train[:200, :] Y_train = Y_train[:200] X_test = X_test[:200*5, :] Y_test = Y_test[:200*5] ''' # Train using Adam optimizer and evaluate the 
classifier num_layers = 5 learning_rate = 0.1 epochs = 100 batch_size = 32 opt = AdamOptimizer(learning_rate) # initialize random weights theta = np.random.uniform(size=(num_layers, 6)) w = np.random.uniform(size=(num_layers, 6)) params = [w, theta] predicted_train, fidel_train = test(params, X_train, Y_train, state_labels) accuracy_train = accuracy_score(Y_train, predicted_train) predicted_test, fidel_test = test(params, X_test, Y_test, state_labels) accuracy_test = accuracy_score(Y_test, predicted_test) # save predictions with random weights for comparison initial_predictions = predicted_test loss = cost(params, X_test, Y_test, state_labels) print( "Epoch: {:2d} | Loss: {:3f} | Train accuracy: {:3f} | Test Accuracy: {:3f}".format( 0, loss, accuracy_train, accuracy_test ) ) for it in range(epochs): for Xbatch, ybatch in iterate_minibatches(X_train, Y_train, batch_size=batch_size): params = opt.step(lambda v: cost(v, Xbatch, ybatch, state_labels), params) predicted_train, fidel_train = test(params, X_train, Y_train, state_labels) accuracy_train = accuracy_score(Y_train, predicted_train) loss = cost(params, X_train, Y_train, state_labels) predicted_test, fidel_test = test(params, X_test, Y_test, state_labels) accuracy_test = accuracy_score(Y_test, predicted_test) res = [it + 1, loss, accuracy_train, accuracy_test] print( "Epoch: {:2d} | Loss: {:3f} | Train accuracy: {:3f} | Test accuracy: {:3f}".format( *res ) ) qml.Rot(*(params[0][0][0:3]*X_train[0, 0:3] + params[1][0][0:3]), wires=[0]) params[1][0][0:3] ###Output _____no_output_____
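###Markdown The training loop above only prints an aggregate test accuracy. Since the relabelled test set is balanced between the two classes, a per-class breakdown can show whether the classifier favours one label; a minimal sketch reusing `predicted_test` from the final epoch and the `accuracy_score` helper defined above: ###Code
# Accuracy restricted to each of the two relabelled classes
acc_class_0 = accuracy_score(Y_test[Y_test == 0], predicted_test[Y_test == 0])
acc_class_1 = accuracy_score(Y_test[Y_test == 1], predicted_test[Y_test == 1])
print("Accuracy on class 0: {:3f} | Accuracy on class 1: {:3f}".format(acc_class_0, acc_class_1))
###Output _____no_output_____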
notebooks/01-intro.ipynb
###Markdown you can create a header 6 levels- lists - pt2 1. item 12. item 2 1. indented item 1 ###Code pandas.read_csv('../data/gapminder.tsv', delimiter='\t') df = pandas.read_csv('../data/gapminder.tsv', delimiter='\t') import pandas as pd df = pd.read_csv('../data/gapminder.tsv', delimiter='\t') type(df) df.shape df.shape() df.info() df.head() df.columns df.values df.index df.dtypes country_df = df['country'] country_df type(country_df) df[['country']] df[['country', 'continent', 'year']].head() df = pd.read_csv('../data/gapminder.tsv', delimiter='\t') del df['country'] df.head() d = df.drop(columns='year', inplace=True) d df = pd.read_csv('../data/gapminder.tsv', delimiter='\t') df.head() df.loc[0] df.loc[-1] df.iloc[0] df.iloc[-1] df.loc[[0, 1, 23]] df.loc[0, :] df.loc[:, ['year', 'pop']] df.iloc[:, [3, 4]] df.loc[df['country'] == 'United States', :] df['country'] == 'United States' df.country # this is a convenience df.loc[(df['country'] == 'United States') | (df.year == 1982), :] df.head() df.groupby('year')['lifeExp'].mean() type(df.groupby(['year', 'continent'])) df.groupby(['year', 'continent'])[['lifeExp', 'gdpPercap']].mean() gp = df.groupby(['year', 'continent'])[['lifeExp', 'gdpPercap']].mean() gp.reset_index() ###Output _____no_output_____ ###Markdown group calculations ###Code df.sample(10) df.groupby('year')['lifeExp'].mean() grouped = df.groupby(['year', 'continent'])[['lifeExp', 'gdpPercap']].mean() grouped.columns grouped.index grouped.head() type(grouped) grouped.reset_index().sample(10) ###Output _____no_output_____ ###Markdown Display rows where country is Zimbabwe and year is 2007. ###Code df.loc[(df['country'] == 'Zimbabwe') & (df['year'] == 2007)] df.head() ###Output _____no_output_____ ###Markdown Display the mean value of LifeExp per year. ###Code df.groupby(['year'])['lifeExp'].mean().reset_index() ###Output _____no_output_____ ###Markdown Display the Standard deviation for lifeExp per year. ###Code import numpy as np df.groupby(['year'])['lifeExp'].agg(np.std).reset_index() df.groupby(['year', 'continent'])[['lifeExp', 'gdpPercap']].agg(np.mean).reset_index() ###Output _____no_output_____
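###Markdown The year-by-continent summaries above can also be written with `pivot_table`, which lays the continents out as columns and is sometimes easier to scan; a small sketch using the same `df`: ###Code
# Mean life expectancy per year (rows) and continent (columns)
df.pivot_table(index='year', columns='continent', values='lifeExp', aggfunc='mean')
###Output _____no_output_____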
notebooks/T7 - 2 - Trees - Árboles de Regresión.ipynb
###Markdown Regression Trees ###Code import pandas as pd data = pd.read_csv("../datasets/boston/Boston.csv") data.head() data.shape colnames = data.columns.values.tolist() predictors = colnames[:13] target = colnames[13] X = data[predictors] Y = data[target] from sklearn.tree import DecisionTreeRegressor regtree = DecisionTreeRegressor(min_samples_split=30, min_samples_leaf=10, max_depth=5, random_state=0) regtree.fit(X,Y) preds = regtree.predict(data[predictors]) data["preds"] = preds data[["preds", "medv"]] from sklearn.tree import export_graphviz with open("resources/boston_rtree.dot", "w") as dotfile: export_graphviz(regtree, out_file=dotfile, feature_names=predictors) import os from graphviz import Source file = open("resources/boston_rtree.dot", "r") text = file.read() Source(text) from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score import numpy as np cv = KFold(n_splits=10, shuffle=True, random_state=1) # Cross-validated MSE (scikit-learn reports it as a negative score) scores = cross_val_score(regtree, X, Y, scoring="neg_mean_squared_error", cv=cv, n_jobs=1) print(scores) score = np.mean(scores) print(score) list(zip(predictors, regtree.feature_importances_)) ###Output _____no_output_____ ###Markdown Random Forests ###Code from sklearn.ensemble import RandomForestRegressor forest = RandomForestRegressor(n_jobs=2, oob_score=True, n_estimators=10000) forest.fit(X,Y) data["rforest_pred"] = forest.oob_prediction_ data[["rforest_pred", "medv"]] data["rforest_error2"] = (data["rforest_pred"]-data["medv"])**2 sum(data["rforest_error2"])/len(data) forest.oob_score_ ###Output _____no_output_____
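###Markdown The tree hyperparameters above (`min_samples_split`, `min_samples_leaf`, `max_depth`) are fixed by hand; a quick sketch of letting cross-validation pick `max_depth` instead, using the same `X` and `Y` (the grid of candidate depths here is only an example): ###Code
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# Search over a few candidate depths with 10-fold cross-validation
param_grid = {"max_depth": [3, 4, 5, 6, 8]}
search = GridSearchCV(DecisionTreeRegressor(min_samples_split=30, min_samples_leaf=10, random_state=0),
                      param_grid, scoring="neg_mean_squared_error", cv=10)
search.fit(X, Y)
print(search.best_params_)
print(search.best_score_)
###Output _____no_output_____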
3_2_Search/BestFirstSearch/BestFirstSearch.ipynb
###Markdown BestFirstSearchThis notebook demonstrates the usage of the best first search algorithm Reaching the goal via best first:https://www.youtube.com/watch?v=ge_-o0RfrgMhttps://www.youtube.com/watch?v=YwAyqkznxa0https://www.youtube.com/watch?v=TPIFP4E7DVohttps://www.youtube.com/watch?v=cl8Kdkr4Gbg Expansion grid:https://www.youtube.com/watch?v=1l7bWfz8sJwhttps://www.youtube.com/watch?v=pH6sDfBalaw Printing the path:https://www.youtube.com/watch?v=6UJFZf40aBghttps://www.youtube.com/watch?v=CyQ2gl-9W4o ###Code # ---------- # User Instructions: # # Define a function, search() that returns a list # in the form of [optimal path length, row, col]. For # the grid shown below, your function should output # [11, 4, 5]. # # If there is no valid path from the start point # to the goal, your function should return the string # 'fail' # ---------- # Grid format: # 0 = Navigable space # 1 = Occupied space import numpy as np """ grid = [[0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 1, 0]] """ grid = [[0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 1, 0], [0, 0, 1, 0, 1, 0], [0, 0, 1, 0, 1, 0], [0, 0, 0, 0, 1, 0]] init = [0, 0] goal = [len(grid)-1, len(grid[0])-1] cost = 1 delta = [[-1, 0], # go up [ 0,-1], # go left [ 1, 0], # go down [ 0, 1]] # go right delta_name = ['^', '<', 'v', '>'] class BestFirstSearch: def __init__(self, grid,init,goal,cost): self.grid = grid self.init = init self.goal = goal self.cost = cost self.width = len(grid[0]) self.height = len(grid) print("{}x{}".format(self.width,self.height)) self.closed = [[0 for i in range(self.width)] for j in range(self.height)] self.expand_grid = [[-1 for i in range(self.width)] for j in range(self.height)] self.actions = [[-1 for i in range(self.width)] for j in range(self.height)] self.open = [[0, init[0], init[1]]] self.walker = np.copy(self.grid) self.path = None self.counter = 1 self.expand_grid[init[0]][init[1]] = 0 self.closed[init[0]][init[1]] = 1 while self.expand_cheapest() and self.closed[goal[0]][goal[1]]==0: pass def expand_cheapest(self): if len(self.open)==0: return False cheapest = self.open[0][0] cheapest_index = 0 for index in range(1,len(self.open)): if self.open[index][0]<cheapest: cheapest = self.open[index][0] cheapest_index = index bx = self.open[cheapest_index][2] by = self.open[cheapest_index][1] bc = self.open[cheapest_index][0] self.open.pop(cheapest_index) self.expand(bx,by,bc) return True def expand(self, x, y, costs): # print("Expanding {} {} with costs {}".format(x,y,costs)) self.walker[y][x] = 5 # print(np.array(self.closed)) # print(np.array(self.walker)) # print("\n") any = False for cdi in range(len(delta)): cd = delta[cdi] new_x = x + cd[1] new_y = y + cd[0] if new_x<0 or new_y<0 or new_x>=self.width or new_y>=self.height: # don't leave the world's grid continue if self.grid[new_y][new_x]!=0: continue if self.closed[new_y][new_x]==0: self.closed[new_y][new_x] = 1 self.walker[new_y][new_x] = 5 self.closed[new_y][new_x] = 1 self.actions[new_y][new_x] = cdi self.expand_grid[new_y][new_x] = self.counter self.counter += 1 self.open.append([costs+self.cost, new_y, new_x]) if new_x==self.goal[1] and new_y==self.goal[0]: self.path = [costs+self.cost, new_y, new_x] return any = True # print(self.open) return any def search(grid,init,goal,cost): # ---------------------------------------- # insert code here # ---------------------------------------- bfs = BestFirstSearch(grid,init,goal,cost) if bfs.path is None: return "fail" return bfs.path def search_g(grid,init,goal,cost): # 
---------------------------------------- # insert code here # ---------------------------------------- bfs = BestFirstSearch(grid,init,goal,cost) return bfs.expand_grid bfs = BestFirstSearch(grid,init,goal,cost) # print(bfs.actions) directions = [[' ' for i in range(bfs.width)] for j in range(bfs.height)] directions = np.array(directions) directions[np.array(grid)==1] = '#' dy = goal[0] dx = goal[1] directions[dy][dx] = "*" print("{} {}".format(dx,dy)) while dy!=init[0] or dx!=init[1]: action = bfs.actions[dy][dx] if action==-1: break dt = delta[action] dx = dx - dt[1] dy = dy - dt[0] directions[dy][dx] = delta_name[action] for row in directions: print(row) ##### Do Not Modify ###### import grader try: response = grader.run_grader(search) print(response) except Exception as err: print(str(err)) ##### Do Not Modify ###### import grader_grid try: response = grader_grid.run_grader_grid(search_g) print(response) except Exception as err: print(str(err)) ###Output 2x2 5x5 7x5 6x5 Correct!
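###Markdown The `expand_cheapest` loop above rescans the whole open list for the lowest cost on every step, which makes this effectively a uniform-cost search. A common refinement is to keep the frontier in a heap; ordering it by a heuristic such as the Manhattan distance to the goal gives a greedy best-first variant (note that, unlike the cost-ordered version, this no longer guarantees the shortest path). A minimal sketch, independent of the class above and reusing the global `grid`, `init`, `goal`, `cost` and `delta`: ###Code
import heapq

def greedy_best_first(grid, init, goal, cost):
    # Frontier ordered by Manhattan distance to the goal, not by path cost.
    def h(y, x):
        return abs(goal[0] - y) + abs(goal[1] - x)

    frontier = [(h(init[0], init[1]), 0, init[0], init[1])]
    visited = {(init[0], init[1])}
    while frontier:
        _, g, y, x = heapq.heappop(frontier)
        if y == goal[0] and x == goal[1]:
            return [g, y, x]
        for dy, dx in delta:  # reuse the moves defined for search() above
            ny, nx = y + dy, x + dx
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (ny, nx) not in visited):
                visited.add((ny, nx))
                heapq.heappush(frontier, (h(ny, nx), g + cost, ny, nx))
    return 'fail'

greedy_best_first(grid, init, goal, cost)
###Output _____no_output_____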
demo_openai.ipynb
###Markdown Data for fine-tuning & prediction ###Code ### Prepararing the data for fine tuning X_train, y_train = parse_train_data(data_path) print(X_train.shape) print(y_train.shape) ### Showing Images of the 3 classes after reshape 28x28x1 plt.figure(figsize=(20,20)) plt.subplot(131) imgplot = plt.imshow(X_train[0,:,:,0], cmap="gist_gray") plt.subplot(132) imgplot = plt.imshow(X_train[1,:,:,0], cmap="gist_gray") plt.subplot(133) imgplot = plt.imshow(X_train[2,:,:,0], cmap="gist_gray") X_predict = parse_predict_data(data_path) print(X_predict.shape) ###### Showing the image with class to predict plt.figure(figsize=(6,6)) imgplot = plt.imshow(X_predict[0,:,:,0], cmap="gist_gray") ###Output _____no_output_____ ###Markdown Preparing the model & prediction ###Code sess=tf.Session() model = OmniglotModelBisonai(num_classes=3, **{'learning_rate':learning_rate}) saver = tf.train.Saver() saver.restore(sess, checkpoint_path) for e in range(epochs): sess.run(model.minimize_op, feed_dict={model.input_ph: X_train.reshape(X_train.shape[:3]), model.label_ph: y_train}) result = sess.run(model.predictions, feed_dict={model.input_ph: X_predict.reshape(X_predict.shape[:3])}) print("The predicted class is {}.".format(result[0])) ###Output The predicted class is 2.
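###Markdown The cell above reports only the prediction for the first image; if `parse_predict_data` returned several images of the same unknown object, a majority vote over the per-image predictions would usually be more robust than trusting a single one. A minimal sketch reusing `result` from the cell above: ###Code
import numpy as np

# Count how often each class index was predicted and keep the most frequent one
votes = np.bincount(result)
print("Majority-vote class: {}".format(votes.argmax()))
###Output _____no_output_____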
colab_apex_builder.ipynb
###Markdown **Pytorch Apex Builder****This notebook will help you build apex wheel for colaboratory runtime.**Built wheel will be automatically downloaded, and can be used with e.g. git, wget, and so on.* Made by Dongha Kim, in Yonsei University.* Github: https://github.com/kdha0727/colab-apex-builder/ ###Code !lsb_release -a gpu_info = !nvidia-smi gpu_info = '\n'.join(gpu_info) if gpu_info.find('failed') >= 0 or gpu_info.find('not found') >= 0: raise RuntimeError( 'Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ' 'and then re-execute this cell.' ) else: print(gpu_info) import torch torch.__version__ %%shell wget https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub apt-get update apt-get install -qq cuda-toolkit-10-2 gcc-5 g++-5 -y apt-get clean ln -s /usr/bin/gcc-5 /usr/local/cuda/bin/gcc ln -s /usr/bin/g++-5 /usr/local/cuda/bin/g++ /usr/local/cuda/bin/nvcc -V export CUDA_HOME=/usr/local/cuda %%shell git clone https://github.com/NVIDIA/apex pip wheel -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex rm -rf apex/ from google.colab import files wheel_dir = !find apex-* assert len(wheel_dir) == 1 files.download(wheel_dir[0]) ###Output _____no_output_____
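###Markdown In a later Colab session the downloaded wheel can be installed directly instead of rebuilding apex; a sketch assuming the wheel has been re-uploaded to the runtime (the filename below is a placeholder; the real name depends on the apex version and the Python ABI of the runtime). ###Code
# Placeholder filename: substitute the wheel produced by the build above.
!pip install --no-cache-dir apex-0.1-cp37-cp37m-linux_x86_64.whl
###Output _____no_output_____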
CoronaAffectedAreasSouthKorea.ipynb
###Markdown Reading the data from the .csv file ###Code import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('PatientInfo.csv') # Check that the data was read successfully df.head() # Count cases in the different provinces cases_count = df['province'].value_counts() # Plot the results cases_count.plot(kind="bar"); plt.title("Affected Provinces in South Korea"); ###Output _____no_output_____
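###Markdown With many provinces the vertical bars get cramped; a sorted horizontal version of the same chart, reusing `cases_count` from above, is often easier to read: ###Code
# Sort ascending so the most affected province ends up at the top of the chart
cases_count.sort_values().plot(kind="barh", figsize=(8, 10));
plt.title("Affected Provinces in South Korea");
###Output _____no_output_____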
examples/money_creation/Ex2_Lending.ipynb
###Markdown [Money Creation Examples](http://www.siebenbrunner.com/moneycreation/) > **Example 2**: Money creation and destruction through lendingWe demonstrate how money is created through bank lending and destroyed through repayment. We also demonstrate the redistribution effects of interest payments and dividend payments by banks.We start by importing required utilities: ###Code import os import sys base_path = os.path.realpath(os.getcwd()+"/../..") sys.path.append(base_path) from abcFinance import Ledger, Account, AccountSide ###Output _____no_output_____ ###Markdown Declaration of agentsWe start by defining their agents and the accounts on their balance sheets: ###Code bank = Ledger(residual_account_name="Equity") private_agent = Ledger(residual_account_name="Equity") bank.make_asset_accounts(['Cash','Loans','Reserves']) bank.make_liability_accounts(['Deposits']) bank.make_flow_accounts(['Interest income']) private_agent.make_asset_accounts(['Cash','Deposits']) private_agent.make_liability_accounts(['Loans']) private_agent.make_flow_accounts(['Dividend income','Interest expenses']) ###Output _____no_output_____ ###Markdown We further define a function that computes the money stocks according to our defined taxonomy: ###Code from IPython.core.display import SVG from IPython.display import display_svg def print_money_stocks(): # Bank money: bank liabilities that are money bank_money = bank.get_balance('Deposits')[1] print("Total (Bank) Money:",bank_money) def print_balance_sheets_and_money_stocks(): bank_balance_sheet = SVG(bank.draw_balance_sheet("Bank Balance Sheet")) private_agent_balance_sheet = SVG(private_agent.draw_balance_sheet("Private Agent Balance Sheet")) display_svg(bank_balance_sheet, private_agent_balance_sheet) print_money_stocks() ###Output _____no_output_____ ###Markdown Start of the exampleWe start by endowing the bank and the private agent with money (note that for the sake of this example it does not matter whether the bank is a commercial or a central bank). ###Code bank.book(debit=[('Reserves',100)],credit=[('Equity',50),('Deposits',50)]) private_agent.book(debit=[('Deposits',50)],credit=[('Equity',50)]) bank.book_end_of_period() private_agent.book_end_of_period() print_balance_sheets_and_money_stocks() ###Output _____no_output_____ ###Markdown The bank now grants a loan to the private agent, thereby increasing the stock of total money: ###Code bank.book(debit=[('Loans',100)],credit=[('Deposits',100)]) private_agent.book(debit=[('Deposits',100)],credit=[('Loans',100)]) print_balance_sheets_and_money_stocks() ###Output _____no_output_____ ###Markdown The private agent now pays some interest on its loan to the bank. Note that the money stock (temporarily) decreases, since the private agent uses its deposits to pay interest, thereby contributing to the bank's profit (and in this case, since there are no other expenses, equity). 
###Code private_agent.book(debit=[('Interest expenses',5)],credit=[('Deposits',5)]) bank.book(debit=[('Deposits',5)],credit=[('Interest income',5)]) print("Bank P&L and change in capital:") bank.print_profit_and_loss() print("Private agent P&L and change in capital:") private_agent.print_profit_and_loss() bank.book_end_of_period() private_agent.book_end_of_period() print_balance_sheets_and_money_stocks() ###Output Bank P&L and change in capital: Flow accounts: Interest income : 5 Profit for period: 5 -- Private agent P&L and change in capital: Flow accounts: Interest expenses : -5 Profit for period: -5 -- ###Markdown The private agent now repays the loan principal, thereby (permanently) destroying the money that was created through the loan granting: ###Code private_agent.book(debit=[('Loans',100)],credit=[('Deposits',100)]) bank.book(debit=[('Deposits',100)],credit=[('Loans',100)]) print_balance_sheets_and_money_stocks() ###Output _____no_output_____ ###Markdown The bank now transfers its profit from the period to the bank owners in the form of dividends, thereby increasing the money stock again. In this example the transfer goes back to the same private agent, but in practice the borrowers and owners of the bank would typically be different sets of agents. Note that in practice the bank's profit is also moved to other agents in a variety of other (expense) forms, e.g. as payment for purchases the bank makes and as salary to its employees. ###Code bank.book(debit=[('Equity',5)],credit=[('Deposits',5)],text='Dividend payout') private_agent.book(debit=[('Deposits',5)],credit=[('Dividend income',5)]) print("Bank P&L and change in capital:") bank.print_profit_and_loss() print("Private agent P&L and change in capital:") private_agent.print_profit_and_loss() bank.book_end_of_period() private_agent.book_end_of_period() print_balance_sheets_and_money_stocks() ###Output Bank P&L and change in capital: Flow accounts: Profit for period: 0 Profit distribution and capital actions Dividend payout : -5 -- Private agent P&L and change in capital: Flow accounts: Dividend income : 5 Profit for period: 5 --
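###Markdown To recap the whole example in terms of the money stock: deposits start at 50, rise to 150 when the loan is granted, fall to 145 after the interest payment, fall to 45 when the principal is repaid, and return to 50 after the dividend payout. A minimal tally of these steps outside the ledger machinery: ###Code
deposits = 50      # initial endowment of the private agent
deposits += 100    # loan granted: new deposits are created
deposits -= 5      # interest paid: deposits destroyed, booked as bank profit
deposits -= 100    # principal repaid: the loan-created money is destroyed
deposits += 5      # dividend paid out: deposits are created again
print(deposits)    # back to 50
###Output _____no_output_____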
05-homework/01-Animals.ipynb
###Markdown Homework 5, Part 1: Building a pandas cheat sheet**Use `animals.csv` to answer the following questions.** The data is small and the questions are pretty simple, so hopefully you can use this for pandas reference in the future. 0) SetupImport pandas **with the correct name** and set `matplotlib` to always display graphics in the notebook. ###Code import pandas as pd %matplotlib inline ###Output _____no_output_____ ###Markdown 1) Reading in a csv fileUse pandas to read in the animals CSV file, saving it as a variable with the normal name for a dataframe ###Code df = pd.read_csv('animals.csv') ###Output _____no_output_____ ###Markdown 2) Checking your dataDisplay the number of rows and columns in your data. Also display the names and data types of each column. ###Code df.shape df.dtypes df ###Output _____no_output_____ ###Markdown 3) Display the first 3 animalsHmmm, we know how to take the first 5, but maybe the first 3. Maybe there is an option to change how many you get? Use `?` to check the documentation on the command. ###Code df.head(2) ###Output _____no_output_____ ###Markdown 4) Sort the animals to show me the 3 longest animals> **TIP:** You can use `.head()` after you sort things! ###Code df.sort_values('length', ascending = False).head(3) ###Output _____no_output_____ ###Markdown 5) Get the mean and standard deviation of animal lengthsYou can do this with separate commands or with a single command. ###Code df.length.mean() df.length.std() ###Output _____no_output_____ ###Markdown 6) How many cats do we have and how many dogs?You only need one command to do this ###Code df.animal.value_counts() ###Output _____no_output_____ ###Markdown 7) Only display the dogs> **TIP:** It's probably easiest to make it display the list of `True`/`False` first, then wrap the `df[]` around it. ###Code df[df.animal == 'dog'] ###Output _____no_output_____ ###Markdown 8) Only display the animals that are longer than 40cm ###Code df[df.length > 40] ###Output _____no_output_____ ###Markdown 9) `length` is the animal's length in centimeters. Create a new column called `inches` that is the length in inches. ###Code df['inches'] = df.length * 0.393701 df ###Output _____no_output_____ ###Markdown 10) Save the cats to a separate variable called `cats`. Save the dogs to a separate variable called `dogs`.This is the same as listing them, but you just save the result to a variable instead of looking at it. Be sure to use `.head()` to make sure your data looks right.Once you do this, every time you use `cats` you'll only be talking about the cats, and same for the dogs. ###Code cats = df[df.animal == 'cat'] dogs = df[df.animal == 'dog'] cats dogs ###Output _____no_output_____ ###Markdown 11) Display all of the animals that are cats and above 12 inches long.First do it using the `cats` variable, then also do it using your `df` dataframe.> **TIP:** For multiple conditions, you use `df[(one condition) & (another condition)]` ###Code cats[cats.length > 12] df[(df.animal == 'cat') & (df.length > 20)] ###Output _____no_output_____ ###Markdown 12) What's the mean length of a cat? What's the mean length of a dog? ###Code cats.length.mean() dogs.length.mean() ###Output _____no_output_____ ###Markdown 13) If you didn't already, use `groupby` to do 12 all at once ###Code df.groupby(by = 'animal').length.mean() ###Output _____no_output_____ ###Markdown 14) Make a histogram of the length of dogs.We didn't talk about how to make a histogram in class! It **does not** use `plot()`. 
Imagine you're a programmer who doesn't want to type out `histogram` - what do you think you'd type instead?> **TIP:** The method is four letters long>> **TIP:** First you'll say "I want the length column," then you'll say "make a histogram">> **TIP:** This is the worst histogram ever ###Code df.length.hist() ###Output _____no_output_____ ###Markdown 15) Make a horizontal bar graph of the length of the animals, with the animal's name as the label> **TIP:** It isn't `df['length'].plot()`, because it needs *both* columns. Think about how we did the scatterplot in class.>> **TIP:** Which is the `x` axis and which is the `y` axis? You'll notice pandas is kind of weird and wrong.>> **TIP:** Make sure you specify the `kind` of graph or else it will be a weird line thing>> **TIP:** If you want, you can set a custom size for your plot by sending it something like `figsize=(15,2)` ###Code df.plot(x = 'name', y = 'length', kind = 'barh', figsize=(12,4)) ###Output _____no_output_____ ###Markdown 16) Make a sorted horizontal bar graph of the cats, with the larger cats on top> **TIP:** Think in steps, even though it's all on one line - first make sure you can sort it, then try to graph it. ###Code df[df.animal == 'cat'].sort_values('length').plot(x = 'name', y = 'length', kind = 'barh') ###Output _____no_output_____ ###Markdown 17) As a reward for getting down here: run the following code, then plot the number of dogs vs. the number of cats> **TIP:** Counting the number of dogs and number of cats does NOT use `.groupby`! That's only for calculations.>> **TIP:** You can set a title with `title="Number of animals"` ###Code import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') df.animal.value_counts().plot(kind = 'barh', title = 'Number of animals') ###Output _____no_output_____ ###Markdown Homework 5, Part 1: Building a pandas cheat sheet**Use `animals.csv` to answer the following questions.** The data is small and the questions are pretty simple, so hopefully you can use this for pandas reference in the future. 0) SetupImport pandas **with the correct name** and set `matplotlib` to always display graphics in the notebook. ###Code import pandas as pd %matplotlib inline ###Output _____no_output_____ ###Markdown 1) Reading in a csv fileUse pandas to read in the animals CSV file, saving it as a variable with the normal name for a dataframe ###Code df = pd.read_csv("animals.csv") df ###Output _____no_output_____ ###Markdown 2) Checking your dataDisplay the number of rows and columns in your data. Also display the names and data types of each column. ###Code df.shape df.dtypes ###Output _____no_output_____ ###Markdown 3) Display the first 3 animalsHmmm, we know how to take the first 5, but maybe the first 3. Maybe there is an option to change how many you get? Use `?` to check the documentation on the command. ###Code df.head(3) ###Output _____no_output_____ ###Markdown 4) Sort the animals to show me the 3 longest animals> **TIP:** You can use `.head()` after you sort things! ###Code df.sort_values(by='length', ascending=False).head(3) ###Output _____no_output_____ ###Markdown 5) Get the mean and standard deviation of animal lengthsYou can do this with separate commands or with a single command. 
###Code df.length.median() df.length.std() ###Output _____no_output_____ ###Markdown 6) How many cats do we have and how many dogs?You only need one command to do this ###Code df.animal.value_counts() ###Output _____no_output_____ ###Markdown 7) Only display the dogs> **TIP:** It's probably easiest to make it display the list of `True`/`False` first, then wrap the `df[]` around it. ###Code dogs = df[df.animal == 'dog'] dogs ###Output _____no_output_____ ###Markdown 8) Only display the animals that are longer than 40cm ###Code df[df.length > 40].groupby(by='animal').head() ###Output _____no_output_____ ###Markdown 9) `length` is the animal's length in centimeters. Create a new column called `inches` that is the length in inches. ###Code df['length_in'] = df.length * 0.39 df.head() ###Output _____no_output_____ ###Markdown 10) Save the cats to a separate variable called `cats`. Save the dogs to a separate variable called `dogs`.This is the same as listing them, but you just save the result to a variable instead of looking at it. Be sure to use `.head()` to make sure your data looks right.Once you do this, every time you use `cats` you'll only be talking about the cats, and same for the dogs. ###Code cats = df[df.animal == 'cat'] dogs = df[df.animal == 'dog'] ###Output _____no_output_____ ###Markdown 11) Display all of the animals that are cats and above 12 inches long.First do it using the `cats` variable, then also do it using your `df` dataframe.> **TIP:** For multiple conditions, you use `df[(one condition) & (another condition)]` ###Code cats[(cats.length_in > 12)] ###Output _____no_output_____ ###Markdown 12) What's the mean length of a cat? What's the mean length of a dog? ###Code cats.mean() dogs.mean() ###Output _____no_output_____ ###Markdown 13) If you didn't already, use `groupby` to do 12 all at once ###Code df.groupby(by='animal').length.median() ###Output _____no_output_____ ###Markdown 14) Make a histogram of the length of dogs.We didn't talk about how to make a histogram in class! It **does not** use `plot()`. Imagine you're a programmer who doesn't want to type out `histogram` - what do you think you'd type instead?> **TIP:** The method is four letters long>> **TIP:** First you'll say "I want the length column," then you'll say "make a histogram">> **TIP:** This is the worst histogram ever ###Code dogs.length.hist() ###Output _____no_output_____ ###Markdown 15) Make a horizontal bar graph of the length of the animals, with the animal's name as the label> **TIP:** It isn't `df['length'].plot()`, because it needs *both* columns. Think about how we did the scatterplot in class.>> **TIP:** Which is the `x` axis and which is the `y` axis? You'll notice pandas is kind of weird and wrong.>> **TIP:** Make sure you specify the `kind` of graph or else it will be a weird line thing>> **TIP:** If you want, you can set a custom size for your plot by sending it something like `figsize=(15,2)` ###Code df.plot(x='name', y='length', kind='barh', figsize=(15,2)) ###Output _____no_output_____ ###Markdown 16) Make a sorted horizontal bar graph of the cats, with the larger cats on top> **TIP:** Think in steps, even though it's all on one line - first make sure you can sort it, then try to graph it. 
###Code # df.groupby(by='Continent').GDP_per_capita.median().plot(kind='barh') cats = df[df.animal == 'cat'] cats cats_sort = cats.sort_values(by='length').plot(x='name', y='length', kind='barh') cats_sort ###Output _____no_output_____ ###Markdown 17) As a reward for getting down here: run the following code, then plot the number of dogs vs. the number of cats> **TIP:** Counting the number of dogs and number of cats does NOT use `.groupby`! That's only for calculations.>> **TIP:** You can set a title with `title="Number of animals"` ###Code import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') df.animal.value_counts().plot(x='cats', y='dogs', kind='barh') ###Output _____no_output_____ ###Markdown Homework 5, Part 1: Building a pandas cheat sheet**Use `animals.csv` to answer the following questions.** The data is small and the questions are pretty simple, so hopefully you can use this for pandas reference in the future. 0) SetupImport pandas **with the correct name** and set `matplotlib` to always display graphics in the notebook. ###Code import pandas as pd import matplotlib %matplotlib inline ###Output _____no_output_____ ###Markdown 1) Reading in a csv fileUse pandas to read in the animals CSV file, saving it as a variable with the normal name for a dataframe ###Code df = pd.read_csv("animals.csv") df ###Output _____no_output_____ ###Markdown 2) Checking your dataDisplay the number of rows and columns in your data. Also display the names and data types of each column. ###Code df.shape df.dtypes ###Output _____no_output_____ ###Markdown 3) Display the first 3 animalsHmmm, we know how to take the first 5, but maybe the first 3. Maybe there is an option to change how many you get? Use `?` to check the documentation on the command. ###Code df.head(3) ###Output _____no_output_____ ###Markdown 4) Sort the animals to show me the 3 longest animals> **TIP:** You can use `.head()` after you sort things! ###Code df.sort_values(by='length', ascending=False).head(3) ###Output _____no_output_____ ###Markdown 5) Get the mean and standard deviation of animal lengthsYou can do this with separate commands or with a single command. ###Code df.length.mean() df.length.std() ###Output _____no_output_____ ###Markdown 6) How many cats do we have and how many dogs?You only need one command to do this ###Code df.animal.value_counts() ###Output _____no_output_____ ###Markdown 7) Only display the dogs> **TIP:** It's probably easiest to make it display the list of `True`/`False` first, then wrap the `df[]` around it. ###Code dogs_df = df[df.animal == 'dog'] dogs_df ###Output _____no_output_____ ###Markdown 8) Only display the animals that are longer than 40cm ###Code long_df = df[df.length > 40] long_df ###Output _____no_output_____ ###Markdown 9) `length` is the animal's length in centimeters. Create a new column called `inches` that is the length in inches. ###Code df['inches'] = df.length * 0.39 df ###Output _____no_output_____ ###Markdown 10) Save the cats to a separate variable called `cats`. Save the dogs to a separate variable called `dogs`.This is the same as listing them, but you just save the result to a variable instead of looking at it. Be sure to use `.head()` to make sure your data looks right.Once you do this, every time you use `cats` you'll only be talking about the cats, and same for the dogs. 
###Code cats = df[df.animal == 'cat'] cats.head() dogs = df[df.animal == 'dog'] dogs.head() ###Output _____no_output_____ ###Markdown 11) Display all of the animals that are cats and above 12 inches long.First do it using the `cats` variable, then also do it using your `df` dataframe.> **TIP:** For multiple conditions, you use `df[(one condition) & (another condition)]` ###Code cats[(cats.length > 12)] ###Output _____no_output_____ ###Markdown 12) What's the mean length of a cat? What's the mean length of a dog? ###Code cats.length.mean() dogs.length.mean() ###Output _____no_output_____ ###Markdown 13) If you didn't already, use `groupby` to do 12 all at once ###Code df.groupby("animal").length.mean() ###Output _____no_output_____ ###Markdown 14) Make a histogram of the length of dogs.We didn't talk about how to make a histogram in class! It **does not** use `plot()`. Imagine you're a programmer who doesn't want to type out `histogram` - what do you think you'd type instead?> **TIP:** The method is four letters long>> **TIP:** First you'll say "I want the length column," then you'll say "make a histogram">> **TIP:** This is the worst histogram ever ###Code %matplotlib inline dogs.length.hist() ###Output _____no_output_____ ###Markdown 15) Make a horizontal bar graph of the length of the animals, with the animal's name as the label> **TIP:** It isn't `df['length'].plot()`, because it needs *both* columns. Think about how we did the scatterplot in class.>> **TIP:** Which is the `x` axis and which is the `y` axis? You'll notice pandas is kind of weird and wrong.>> **TIP:** Make sure you specify the `kind` of graph or else it will be a weird line thing>> **TIP:** If you want, you can set a custom size for your plot by sending it something like `figsize=(15,2)` ###Code df.plot(x='name', y='length', kind='barh', figsize=(10,8)) ###Output _____no_output_____ ###Markdown 16) Make a sorted horizontal bar graph of the cats, with the larger cats on top> **TIP:** Think in steps, even though it's all on one line - first make sure you can sort it, then try to graph it. ###Code cats.sort_values(by='length', ascending=False) cats.sort_values(by='length', ascending=True).plot(x='name', y='length', kind='barh', figsize=(8,6)) ###Output _____no_output_____ ###Markdown 17) As a reward for getting down here: run the following code, then plot the number of dogs vs. the number of cats> **TIP:** Counting the number of dogs and number of cats does NOT use `.groupby`! That's only for calculations.>> **TIP:** You can set a title with `title="Number of animals"` ###Code import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') df.animal.value_counts() df.animal.value_counts().plot(title='Number of animals', kind='barh', figsize=(8,6)) ###Output _____no_output_____ ###Markdown Homework 5, Part 1: Building a pandas cheat sheet**Use `animals.csv` to answer the following questions.** The data is small and the questions are pretty simple, so hopefully you can use this for pandas reference in the future. 0) SetupImport pandas **with the correct name** and set `matplotlib` to always display graphics in the notebook. ###Code import pandas as pd %matplotlib inline ###Output _____no_output_____ ###Markdown 1) Reading in a csv fileUse pandas to read in the animals CSV file, saving it as a variable with the normal name for a dataframe ###Code df = pd.read_csv("animals.csv") #must name df (dataframe) df ###Output _____no_output_____ ###Markdown 2) Checking your dataDisplay the number of rows and columns in your data. 
Also display the names and data types of each column. ###Code df.shape df.dtypes ###Output _____no_output_____ ###Markdown 3) Display the first 3 animalsHmmm, we know how to take the first 5, but maybe the first 3. Maybe there is an option to change how many you get? Use `?` to check the documentation on the command. ###Code df.head(3) ###Output _____no_output_____ ###Markdown 4) Sort the animals to show me the 3 longest animals> **TIP:** You can use `.head()` after you sort things! ###Code df.length.sort_values(ascending=False) ###Output _____no_output_____ ###Markdown 5) Get the mean and standard deviation of animal lengthsYou can do this with separate commands or with a single command. ###Code df.length.mean() df.length.std() ###Output _____no_output_____ ###Markdown 6) How many cats do we have and how many dogs?You only need one command to do this ###Code cats = df[df.animal == 'cat'] cats.count() dogs = df[df.animal == 'dog'] dogs.count() ###Output _____no_output_____ ###Markdown 7) Only display the dogs> **TIP:** It's probably easiest to make it display the list of `True`/`False` first, then wrap the `df[]` around it. ###Code dogs ###Output _____no_output_____ ###Markdown 8) Only display the animals that are longer than 40cm ###Code df[df.length > 40] ###Output _____no_output_____ ###Markdown 9) `length` is the animal's length in centimeters. Create a new column called `inches` that is the length in inches. ###Code # Making a new column, must use ['column'] df['length_inches'] = df.length / 2.54 df ###Output _____no_output_____ ###Markdown 10) Save the cats to a separate variable called `cats`. Save the dogs to a separate variable called `dogs`.This is the same as listing them, but you just save the result to a variable instead of looking at it. Be sure to use `.head()` to make sure your data looks right.Once you do this, every time you use `cats` you'll only be talking about the cats, and same for the dogs. ###Code cats #done in Q6 dogs #done in Q6 ###Output _____no_output_____ ###Markdown 11) Display all of the animals that are cats and above 12 inches long.First do it using the `cats` variable, then also do it using your `df` dataframe.> **TIP:** For multiple conditions, you use `df[(one condition) & (another condition)]` ###Code df[(df.animal == 'cat') & (df.length_inches > 12)] cat = df.animal == 'cat' len_greater_12 = df.length_inches > 12 df[cat & len_greater_12] ###Output _____no_output_____ ###Markdown 12) What's the mean length of a cat? What's the mean length of a dog? ###Code cats.length.mean() dogs.length.mean() ###Output _____no_output_____ ###Markdown 13) If you didn't already, use `groupby` to do 12 all at once ###Code df.groupby(by='animal').length.mean() ###Output _____no_output_____ ###Markdown 14) Make a histogram of the length of dogs.We didn't talk about how to make a histogram in class! It **does not** use `plot()`. Imagine you're a programmer who doesn't want to type out `histogram` - what do you think you'd type instead?> **TIP:** The method is four letters long>> **TIP:** First you'll say "I want the length column," then you'll say "make a histogram">> **TIP:** This is the worst histogram ever ###Code dogs.length.hist() #histogram ###Output _____no_output_____ ###Markdown 15) Make a horizontal bar graph of the length of the animals, with the animal's name as the label> **TIP:** It isn't `df['length'].plot()`, because it needs *both* columns. Think about how we did the scatterplot in class.>> **TIP:** Which is the `x` axis and which is the `y` axis? 
You'll notice pandas is kind of weird and wrong.>> **TIP:** Make sure you specify the `kind` of graph or else it will be a weird line thing>> **TIP:** If you want, you can set a custom size for your plot by sending it something like `figsize=(15,2)` ###Code #df.groupby(by='animal').length.plot(kind='barh') #horizontal bar chart df.plot(x='animal', y='length', kind ='barh') ###Output _____no_output_____ ###Markdown 16) Make a sorted horizontal bar graph of the cats, with the larger cats on top> **TIP:** Think in steps, even though it's all on one line - first make sure you can sort it, then try to graph it. ###Code #df.sort_values('c', ascending=False)[['a','b']].plot.bar(stacked=True) cats.sort_values(by='length').plot(kind ='barh') ###Output _____no_output_____ ###Markdown 17) As a reward for getting down here: run the following code, then plot the number of dogs vs. the number of cats> **TIP:** Counting the number of dogs and number of cats does NOT use `.groupby`! That's only for calculations.>> **TIP:** You can set a title with `title="Number of animals"` ###Code import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') df.animal.value_counts().plot(kind='barh', title="Number of animals") ###Output _____no_output_____
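For quick reference, the main idioms from the answers above can be condensed into a few lines. This is only a recap sketch, assuming the same `animals.csv` with `animal`, `name` and `length` columns:

```python
import pandas as pd

df = pd.read_csv("animals.csv")

# Question 4: the 3 longest animals, keeping all columns
df.sort_values(by="length", ascending=False).head(3)

# Questions 6/17: how many cats and how many dogs, in one command
df.animal.value_counts()

# Question 13: mean length per animal type in one go
df.groupby("animal").length.mean()
```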
Assignment6/.ipynb_checkpoints/ConnectedComponents-checkpoint.ipynb
###Markdown 6.6.2019 Image Processing in Physics Julia Herzen, Klaus Achterhold, Fabio De Marco, Manuel Schultheiß Exercise 6, Task 2: Connected ComponentsHave you ever wondered how a battery looks inside?This exercise will answer all your urgent questions!As batteries are produced on a large scale nowadays, non-destructive testing to maintain battery safety can be performed by computed tomography, for example (Further information not needed to solve the exercise: https://www.nature.com/articles/ncomms7924).We performed a CT scan of a 9V block battery for you and your task is to segment the battery cells using a connected component algorithm and thresholding. Afterwards you determine the median and mean intensity for each battery cell and plot them.Please note you need to install scikit-image to solve this exercise (https://scikit-image.org/docs/dev/install.html). ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from ipywidgets import interactive from skimage.measure import label, regionprops # Load a CT scan of a battery battery = np.load("battery.npy")[:, ::2, ::2] ###Output _____no_output_____ ###Markdown **Task 1: Min-Max Normalization: **First we want to normalize the intensity values to a $[0, 1]$ range (e.g. the highest value in the array should be 1 and the lowest value should be 0).In the original CT scan, the intensities may also have negative values such as -2. Use `battery.min()` and `battery.max()` to find the minimum and maximum. Mathematically, from each voxel $v_i$ the minimum intensity of the whole scan is subtracted and afterwards it is divided by the intensity range:$$\mathrm{v}_i=\frac{\mathrm{v}_i-\min(\mathrm{battery})}{\max(\mathrm{battery})-\min(\mathrm{battery})}$$ ###Code battery = (battery - np.min(battery))/(np.max(battery) - np.min(battery)) ###Output _____no_output_____ ###Markdown Some assertion code to ensure everything was implemented correctly: ###Code assert battery.max()==1 and battery.min()==0 ###Output _____no_output_____ ###Markdown The following function displays a 3D scan for you, where you can inspect the slice stack by using a slider. ###Code def show_ct(ctscan, colors=False): def f( ct_slice_index): fig, ax=plt.subplots(dpi=200) ax.imshow(ctscan[ct_slice_index], cmap="gray" if not colors else "viridis", vmin=0, vmax=1) interactive_plot = interactive(f, ct_slice_index=(0,9)) output = interactive_plot.children[-1] display(interactive_plot) show_ct(battery) ###Output _____no_output_____ ###Markdown **Task 2: Binary Thresholding** Your task is to threshold the scan to a value above 0.42. `thresholded_battery` should contain `True` for values > 0.42 and `False` for other voxels. ###Code thresholded_battery = battery>0.42 show_ct(thresholded_battery.astype(np.int32)) ###Output _____no_output_____ ###Markdown **Task 3: Connected components ** Use the label function from skimage to assign a unique integer value to each connected group of voxels ###Code label_image = label(thresholded_battery) ###Output _____no_output_____ ###Markdown We can inspect the result using our `plt.imshow` function for the 4th slice: ###Code plt.imshow(label_image[3]) ###Output _____no_output_____ ###Markdown **Task 4: Extract Battery Cells: ** Battery cells in our scan have between 4000 and 6000 voxels. Add the `region.bbox` property of regions with a voxel count within that range to the list `regions`. You can access the voxel count of each connected component using `region.area`.
###Code regions = [] for region in regionprops(label_image): if region.area >= 4000 and region.area < 6000: # ??? regions.append(region.bbox) ###Output _____no_output_____ ###Markdown Next we want to show each battery cell. This helper function will return a subvolume when providing a battery cell number. ###Code def get_cell(cell_index): """ Args: cell_index: The cell number. Can be 1,2,3,4,5 or 6 """ start_dim0 = regions[cell_index][0] end_dim0 = regions[cell_index][3] start_dim1 = regions[cell_index][1] end_dim1 = regions[cell_index][4] start_dim2 = regions[cell_index][2] end_dim2 = regions[cell_index][5] return battery[start_dim0:end_dim0,start_dim1:end_dim1,start_dim2:end_dim2] # Show the 3D volume of cell 1 show_ct(get_cell(1)) ###Output _____no_output_____ ###Markdown **Task 5: Plot Median and Mean ** Next, we want to extract mean intensity and maximum intensity for each cell and plot it into a scatterplot. Hereby, we create a colormap first. Your task is to extract the mean and median intensity from each slice in each cell (Consequently you need to have 48 values for mean and median each, as there are 6 cells with 8 slices each). Plot these values using a scatterplot, wherby the x-axis defines the mean instensiy and the y-axis defines the median intensity. ###Code import matplotlib.cm as cm colormap = cm.rainbow(np.linspace(0, 1, 6)) means = [] medians = [] colors = [] for cell in range(0,6): for slice_index in range(1,9): # We do not use the first and the last slice means.append(np.mean(label_image[slice_index][cell])) medians.append(np.median(label_image[slice_index][cell])) colors.append(colormap[cell]) plt.title("Battery Features") plt.xlabel("Mean Intensity") plt.ylabel("Median Intensity") plt.scatter(means, medians, color=colors) plt.show() ###Output _____no_output_____
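Note that the Task 5 loop above takes `np.mean(label_image[slice_index][cell])`, i.e. statistics of single rows of the label image rather than of the battery-cell intensities. A possible per-cell variant reuses the `get_cell` helper defined earlier; this is only a sketch and assumes each cell's bounding box spans at least 9 slices along the first axis:

```python
means, medians, colors = [], [], []
for cell in range(6):
    volume = get_cell(cell)            # normalized intensity subvolume of one battery cell
    for slice_index in range(1, 9):    # skip the first and the last slice
        means.append(np.mean(volume[slice_index]))
        medians.append(np.median(volume[slice_index]))
        colors.append(colormap[cell])

plt.title("Battery Features")
plt.xlabel("Mean Intensity")
plt.ylabel("Median Intensity")
plt.scatter(means, medians, color=colors)
plt.show()
```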
Gena/test_sentinel2.ipynb
###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function. ###Code Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) Map.setCenter(35.2, 31, 13) Map.addLayer(image, {}, 'Sentinel-2 images January, 2018') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). 
The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as geemap except: import geemap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. ###Code Map = geemap.Map(center=[40,-100], zoom=4) Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) Map.setCenter(35.2, 31, 13) Map.addLayer(image, {}, 'Sentinel-2 images January, 2018') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____ ###Markdown Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. 
To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages: ###Code from pydeck_earthengine_layers import EarthEngineLayer import pydeck as pdk import requests import ee ###Output _____no_output_____ ###Markdown AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication. ###Code try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create MapNext it's time to create a map. Here we create an `ee.Image` object ###Code # Initialize objects ee_layers = [] view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45) # %% # Add Earth Engine dataset image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) view_state = pdk.ViewState(longitude=35.2, latitude=31, zoom=13) ee_layers.append(EarthEngineLayer(ee_object=image, vis_params={})) ###Output _____no_output_____ ###Markdown Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map: ###Code r = pdk.Deck(layers=ee_layers, initial_view_state=view_state) r.show() ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). 
Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function. ###Code Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) Map.setCenter(35.2, 31, 13) Map.addLayer(image, {}, 'Sentinel-2 images January, 2018') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. ###Code # %%capture # !pip install earthengine-api # !pip install geehydro ###Output _____no_output_____ ###Markdown Import libraries ###Code import ee import folium import geehydro ###Output _____no_output_____ ###Markdown Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for this first time or if you are getting an authentication error. ###Code # ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. 
###Code Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) Map.setCenter(35.2, 31, 13) Map.addLayer(image, {}, 'Sentinel-2 images January, 2018') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time. ###Code # %%capture # !pip install earthengine-api # !pip install geehydro ###Output _____no_output_____ ###Markdown Import libraries ###Code import ee import folium import geehydro ###Output _____no_output_____ ###Markdown Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error. ###Code # ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ###Code Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) Map.setCenter(35.2, 31, 13) Map.addLayer(image, {}, 'Sentinel-2 images January, 2018') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium. 
###Code import subprocess try: import geehydro except ImportError: print('geehydro package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro']) ###Output _____no_output_____ ###Markdown Import libraries ###Code import ee import folium import geehydro ###Output _____no_output_____ ###Markdown Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. ###Code try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ###Output _____no_output_____ ###Markdown Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`. ###Code Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) Map.setCenter(35.2, 31, 13) Map.addLayer(image, {}, 'Sentinel-2 images January, 2018') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True) Map ###Output _____no_output_____ ###Markdown View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet. ###Code # Installs geemap package import subprocess try: import geemap except ImportError: print('Installing geemap ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) import ee import geemap ###Output _____no_output_____ ###Markdown Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. ###Code Map = geemap.Map(center=[40,-100], zoom=4) Map ###Output _____no_output_____ ###Markdown Add Earth Engine Python script ###Code # Add Earth Engine dataset image = ee.ImageCollection('COPERNICUS/S2') \ .filterDate('2017-01-01', '2017-01-02').median() \ .divide(10000).visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5}) Map.setCenter(35.2, 31, 13) Map.addLayer(image, {}, 'Sentinel-2 images January, 2018') ###Output _____no_output_____ ###Markdown Display Earth Engine data layers ###Code Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ###Output _____no_output_____
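All variants above composite a single day of Sentinel-2 acquisitions. When a longer time window is used, it is common to filter on scene cloudiness first. The following is a sketch, assuming the standard `CLOUDY_PIXEL_PERCENTAGE` metadata property of the `COPERNICUS/S2` collection and the `Map` object created above:

```python
# Median composite over one month, keeping only scenes with < 20% cloud cover
image = (
    ee.ImageCollection('COPERNICUS/S2')
    .filterDate('2017-01-01', '2017-02-01')
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
    .median()
    .divide(10000)
    .visualize(**{'bands': ['B12', 'B8', 'B4'], 'min': 0.05, 'max': 0.5})
)

Map.setCenter(35.2, 31, 13)
Map.addLayer(image, {}, 'Sentinel-2 median, January 2017')
```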
.ipynb_checkpoints/Version 4 RUL multiple_models-checkpoint.ipynb
###Markdown Version 03 -> Pred RUL ###Code !pip install texttable from platform import python_version print(python_version()) # importing required libraries from scipy.io import loadmat import matplotlib.pyplot as plt import numpy as np from pprint import pprint as pp from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from pprint import pprint from sklearn.linear_model import LinearRegression, Ridge, Lasso, BayesianRidge, ARDRegression, SGDRegressor from texttable import Texttable import math from sklearn.metrics import r2_score # getting the battery data # bs_all = [ # 'B0005', 'B0006', 'B0007', 'B0018', 'B0025', 'B0026', 'B0027', 'B0028', 'B0029', 'B0030', 'B0031', 'B0032', # 'B0042', 'B0043', 'B0044', 'B0045', 'B0046', # 'B0047', 'B0048' # ] bs_all = [ 'B0005', 'B0006', 'B0007', 'B0018' ] ds = {} for b in bs_all: ds[b] = loadmat(f'DATA/{b}.mat') types = {} times = {} ambient_temperatures = {} datas = {} for b in bs_all: x = ds[b][b]["cycle"][0][0][0] ambient_temperatures[b] = x['ambient_temperature'] types[b] = x['type'] times[b] = x['time'] datas[b] = x['data'] # clubbing all the compatible batteries together # Batteries are compatible if they were recorded under similar conditions # And their data size match up bs_compt = {} for b in bs_all: sz = 0 for j in range(datas[b].size): if types[b][j] == 'discharge': sz += 1 if bs_compt.get(sz): bs_compt[sz].append(b) else: bs_compt[sz] = [ b ] pp(bs_compt) BSSS = bs_compt ## CRITICAL TIME POINTS FOR A CYCLE ## We will only these critical points for furthur training ## TEMPERATURE_MEASURED ## => Time at highest temperature ## VOLTAGE_MEASURED ## => Time at lowest Voltage ## VOLTAGE_LOAD ## => First time it drops below 1 volt after 1500 time def getTemperatureMeasuredCritical(tm, time): high = 0 critical = 0 for i in range(len(tm)): if (tm[i] > high): high = tm[i] critical = time[i] return critical def getVoltageMeasuredCritical(vm, time): low = 1e9 critical = 0 for i in range(len(vm)): if (vm[i] < low): low = vm[i] critical = time[i] return critical def getVoltageLoadCritical(vl, time): for i in range(len(vl)): if (time[i] > 1500 and vl[i] < 1): return time[i] return -1 ###Output _____no_output_____ ###Markdown MODEL* Considering 1 Cycle for RUL estimation Features* [CP1, CP2, CP3, Capacity] -> RUL Remaining Useful Life* n = number of cycles above threshold* RUL of Battery after (cycle x) = (1 - (x / n)) * 100 ###Code ## X: Features ## y: RUL ## x: no. 
of cycles to merge def merge(X, y, x): XX = [] yy = [] sz = len(X) for i in range(sz - x + 1): curr = [] for j in range(x): for a in X[i + j]: curr.append(a) XX.append(curr) # val = 0 # for j in range(x): # val += y[i + j] # val /= x yy.append(y[i + x - 1]) return XX, yy ## Data Structure # Cycles[battery][param][cycle] # Cycles[battery][Capacity][cycle] Cycles = {} params = ['Temperature_measured', 'Voltage_measured', 'Voltage_load', 'Time'] rmses = [] for bs_cmpt in bs_compt: rmses.append([]) # iterate over the merge hyper parameter for xx in range(1, 10): results = Texttable() results.add_row(['Compatible Batteries', 'Cycles', 'MAE', 'RMSE', 'R2 Score' ]) loc = 0 # iterate over all the battery sets for bs_cmpt in bs_compt: # getting data for a given set # y contains RUL after current cycle # model will train for y y = [] bs = bs_compt[bs_cmpt] for b in bs: Cycles[b] = {} for param in params: Cycles[b][param] = [] for j in range(datas[b].size): if types[b][j] == 'discharge': Cycles[b][param].append(datas[b][j][param][0][0][0]) cap = [] for j in range(datas[b].size): if types[b][j] == 'discharge': cap.append(datas[b][j]['Capacity'][0][0][0][0]) Cycles[b]['Capacity'] = np.array(cap) Cycles[b]['count'] = len(Cycles[b][params[0]]) effective_cycle_count = 0 for x in Cycles[b]['Capacity']: if (x < 1.4): break effective_cycle_count += 1 for i in range(len(Cycles[b]['Capacity'])): if (i < effective_cycle_count): y.append((1 - ((i + 1) / effective_cycle_count)) * 100) else: y.append(0) # preparing data for regression model temperature_measured = [] voltage_measured = [] voltage_load = [] capacity = [] for b in bs: for c in Cycles[b]['Capacity']: capacity.append(c) for i in range(Cycles[b]['count']): temperature_measured.append(getTemperatureMeasuredCritical(Cycles[b]['Temperature_measured'][i], Cycles[b]['Time'][i])) voltage_measured.append(getVoltageMeasuredCritical(Cycles[b]['Voltage_measured'][i], Cycles[b]['Time'][i])) voltage_load.append(getVoltageLoadCritical(Cycles[b]['Voltage_load'][i], Cycles[b]['Time'][i])) # creating the model X = [] for i in range(len(temperature_measured)): X.append(np.array([temperature_measured[i], voltage_measured[i], voltage_load[i], capacity[i]])) # X.append(np.array(capacity)) X = np.array(X) y = np.array(y) # merge cycles X, y = merge(X, y, xx) # creating train test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) # fitting the model regressor = LinearRegression() regressor.fit(X_train, y_train) # test y_pred = regressor.predict(X_test) # model evaluation diff = 0 total = 0 rmse = 0 for i in range(len(y_test)): diff += abs(y_test[i] - y_pred[i]) rmse += ((y_test[i] - y_pred[i]) * (y_test[i] - y_pred[i])) total += y_test[i] diff /= len(y_test) total /= len(y_test) rmse = math.sqrt(rmse / len(y_test)) # accuracy = ((total - diff) / total) * 100 accuracy = r2_score(y_test, y_pred) # Adding evaluation to result array to print in a table results.add_row([ str(bs), str(Cycles[bs[0]]['count']), diff, rmse, accuracy ]) rmses[loc].append(rmse) loc += 1 # printing results # print(f'Evaluation: Clubbing Compatible Batteries for cycle param: {xx}\n{results.draw()}') # print(rmses) for rm in rmses: plt.plot(range(1, len(rm) + 1), rm) plt.ylabel("Error") plt.show() def removeFromGroup(x): loc = 0 y = {} for a in x: for b in x[a]: y[loc] = [ b ] loc += 1 return y !pip install scikit-elm !pip install --upgrade pyswarm !pip install dask !pip install fsspec>=0.3.3 ## Data Structure # Cycles[battery][param][cycle] # 
Cycles[battery][Capacity][cycle] ranges_l = [0, 20, 50, 70] ranges_r = [20, 50, 70, 90] # iterate over range for iiiiii in range(len(ranges_l)): ## example values 0, 20, 50, 70 xxxx = 20 from pyswarm import pso from sklearn.ensemble import RandomForestRegressor from tqdm import tqdm Cycles = {} params = ['Temperature_measured', 'Voltage_measured', 'Voltage_load', 'Time'] # remove batteries from group bs_compt = BSSS bs_compt = removeFromGroup(bs_compt) final_results = [] final_results_train = [] # iterate over seed for seed in tqdm(range(25)): rmses = [] rmses_train = [] for bs_cmpt in bs_compt: rmses.append([bs_compt[bs_cmpt][0]]) rmses_train.append([bs_compt[bs_cmpt][0]]) hyper_params = [] ######################################## CHANGE THISSSSSS ############################################### # example values (1, 21) (21, 51) (51, 71) (71, 91) # iterate over the merge hyper parameter for xx in range(21, 51): results = Texttable() results.add_row([ 'Compatible Batteries', 'Cycles', 'MAE', 'RMSE', 'R2 Score' ]) loc = 0 # iterate over all the battery sets for bs_cmpt in bs_compt: # getting data for a given set # y contains RUL after current cycle # model will train for y y = [] bs = bs_compt[bs_cmpt] for b in bs: Cycles[b] = {} for param in params: Cycles[b][param] = [] for j in range(datas[b].size): if types[b][j] == 'discharge': Cycles[b][param].append(datas[b][j][param][0][0][0]) cap = [] for j in range(datas[b].size): if types[b][j] == 'discharge': cap.append(datas[b][j]['Capacity'][0][0][0][0]) Cycles[b]['Capacity'] = np.array(cap) Cycles[b]['count'] = len(Cycles[b][params[0]]) effective_cycle_count = 0 for x in Cycles[b]['Capacity']: if (x < 1.4): break effective_cycle_count += 1 for i in range(len(Cycles[b]['Capacity'])): if (i < effective_cycle_count): y.append((1 - ((i + 1) / effective_cycle_count)) * 100) else: y.append(0) # preparing data for regression model temperature_measured = [] voltage_measured = [] voltage_load = [] capacity = [] for b in bs: for c in Cycles[b]['Capacity']: capacity.append(c) for i in range(Cycles[b]['count']): temperature_measured.append(getTemperatureMeasuredCritical(Cycles[b]['Temperature_measured'][i], Cycles[b]['Time'][i])) voltage_measured.append(getVoltageMeasuredCritical(Cycles[b]['Voltage_measured'][i], Cycles[b]['Time'][i])) voltage_load.append(getVoltageLoadCritical(Cycles[b]['Voltage_load'][i], Cycles[b]['Time'][i])) # creating the model X = [] for i in range(len(temperature_measured)): X.append(np.array([temperature_measured[i], voltage_measured[i], voltage_load[i], capacity[i]])) # X.append(np.array(capacity)) X = np.array(X) y = np.array(y) # merge cycles X, y = merge(X, y, xx) # creating train test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = seed) ############## ------------------ MODEL ------------------- #################### # ## [1. linear reg] ### # regressor = Ridge(alpha=1000000) # regressor.fit(X_train, y_train) # y_pred = regressor.predict(X_test) # y_pred_train = regressor.predict(X_train) # ## [2. ARDRegresssor] ### # regressor = ARDRegression() # regressor.fit(X_train, y_train) # y_pred = regressor.predict(X_test) # y_pred_train = regressor.predict(X_train) # ## [3. 
Bayes Model] ### # regressor = BayesianRidge() # regressor.fit(X_train, y_train) # y_pred = regressor.predict(X_test) # y_pred_train = regressor.predict(X_train) # # ## ELM Regressor with pso ### # from skelm import ELMRegressor # def objective(aaa): # estimator = ELMRegressor(alpha = aaa[0], # n_neurons = aaa[1], # ufunc='relu', # include_original_features = False) # estimator.fit(X_train, y_train) # y_pred = estimator.predict(X_test) # rmse = 0 # for i in range(len(y_test)): # rmse += ((y_test[i] - y_pred[i]) * (y_test[i] - y_pred[i])) # rmse = math.sqrt(rmse / len(y_test)) # return rmse # # bounds for hyper param # lb = [1, 10] # ub = [1e6, 1000] # # optimizing # xopt, fopt = pso(objective, lb, ub) # # hyper_params.append(xopt) ### [4. ELM] ###### # estimator = ELMRegressor(alpha = 1e6, # n_neurons = 800, # ufunc='relu', # include_original_features = False) # estimator.fit(X_train, y_train) # y_pred = estimator.predict(X_test) # y_pred_train = estimator.predict(X_train) # ### [5. Decision Tree] ### # from sklearn import tree # regressor = tree.DecisionTreeRegressor() # regressor.fit(X_train, y_train) # y_pred = regressor.predict(X_test) # y_pred_train = regressor.predict(X_train) ### [6. Random Forest Regressor] ### (BEST) regressor = RandomForestRegressor(max_depth =10, random_state= 0) regressor.fit(X_train, y_train) y_pred = regressor.predict(X_test) y_pred_train = regressor.predict(X_train) ############# ----------------- MODEL -------------------- ##################### # model evaluation diff = 0 total = 0 rmse = 0 for i in range(len(y_test)): diff += abs(y_test[i] - y_pred[i]) rmse += ((y_test[i] - y_pred[i]) * (y_test[i] - y_pred[i])) total += y_test[i] diff /= len(y_test) total /= len(y_test) rmse = math.sqrt(rmse / len(y_test)) / 100 accuracy2 = ((total - diff) / total) * 100 accuracy = r2_score(y_test, y_pred) # Adding evaluation to result array to print in a table # results.add_row([ str(bs), str(Cycles[bs[0]]['count']), diff, rmse, accuracy, accuracy2 ]) rmses[loc].append(rmse) #### adding rmses of the train rmse_train = 0 for i in range(len(y_train)): rmse_train += ((y_train[i] - y_pred_train[i]) * (y_train[i] - y_pred_train[i])) rmse_train = math.sqrt(rmse_train / len(y_train)) / 100 rmses_train[loc].append(rmse_train) loc += 1 final_results.append(rmses) final_results_train.append(rmses_train) ## --- STORING RESULTS TO THE FILE # %matplotlib from statistics import stdev, mode mns = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } mns_train = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } cycles = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } cycles_train = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } fo = open("/home/yash/Documents/RUL/RESULTS_25_ITR(STD, AVG)/Random Forest Reg/21-50/result.txt", "w") stddevs_rmses = [] avgs_rmses = [] for ii in range(len(final_results)): rmses = final_results[ii] rmses_train = final_results_train[ii] for i in range(len(rmses)): rm = rmses[i] rm_train = rmses_train[i] mn = 100000 mn_train = 100000 loc, loc_train = -1, -1 for i in range(1, len(rm)): if (mn > rm[i]): mn = rm[i] loc = i if (mn_train > rm_train[i]): mn_train = rm_train[i] loc_train = i fo.write(f"{rm[0]}\n") fo.write("Minima Test: {:.16f}, Param (x): {}\n".format(mn, loc + xxxx)) fo.write("Minima Train: {:.16f}, Param (x): {}\n".format(mn_train, loc_train + xxxx)) fo.write("\n") mns[rm[0]].append(mn) mns_train[rm[0]].append(mn_train) cycles[rm[0]].append(loc + xxxx) cycles_train[rm[0]].append(loc_train + xxxx) # fig, ax = plt.subplots() # 
ax.plot(range(1 + xxxx, len(rm) + xxxx), rm[1:], "-b", label="test set") # ax.plot(range(1 + xxxx, len(rm) + xxxx), rm_train[1:], "-r", label="train set") # plt.legend(loc="upper right") # plt.ylabel(rm[0]) # plt.show() fo.write("-----------------------------------------------------------------------------------\n") fo.write("\n") for battery in mns: fo.write(f"Battery: {battery} StdDevRMSE: {stdev(mns[battery])}\n") fo.write(f"Battery: {battery} AvgRMSE: {sum(mns[battery]) / len(mns[battery])}\n") fo.write(f"Battery: {battery} ModeCycleParam: {mode(cycles[battery])}\n") fo.write("\n") # print(f"Standard Dev RMSE: {stddevs_rmses}") # print(f"Average RMSE: {avgs_rmses}") # Close opend file fo.close() # %matplotlib from statistics import stdev, mode mns = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } mns_train = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } cycles = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } cycles_train = { 'B0005': [], 'B0006': [], 'B0007': [], 'B0018': [] } fo = open("/home/yash/Documents/RUL/RESULTS_25_ITR(STD, AVG)/Random Forest Reg/21-50/result.txt", "w") stddevs_rmses = [] avgs_rmses = [] for ii in range(len(final_results)): rmses = final_results[ii] rmses_train = final_results_train[ii] for i in range(len(rmses)): rm = rmses[i] rm_train = rmses_train[i] mn = 100000 mn_train = 100000 loc, loc_train = -1, -1 for i in range(1, len(rm)): if (mn > rm[i]): mn = rm[i] loc = i if (mn_train > rm_train[i]): mn_train = rm_train[i] loc_train = i fo.write(f"{rm[0]}\n") fo.write("Minima Test: {:.16f}, Param (x): {}\n".format(mn, loc + xxxx)) fo.write("Minima Train: {:.16f}, Param (x): {}\n".format(mn_train, loc_train + xxxx)) fo.write("\n") mns[rm[0]].append(mn) mns_train[rm[0]].append(mn_train) cycles[rm[0]].append(loc + xxxx) cycles_train[rm[0]].append(loc_train + xxxx) # fig, ax = plt.subplots() # ax.plot(range(1 + xxxx, len(rm) + xxxx), rm[1:], "-b", label="test set") # ax.plot(range(1 + xxxx, len(rm) + xxxx), rm_train[1:], "-r", label="train set") # plt.legend(loc="upper right") # plt.ylabel(rm[0]) # plt.show() fo.write("-----------------------------------------------------------------------------------\n") fo.write("\n") for battery in mns: fo.write(f"Battery: {battery} StdDevRMSE: {stdev(mns[battery])}\n") fo.write(f"Battery: {battery} AvgRMSE: {sum(mns[battery]) / len(mns[battery])}\n") fo.write(f"Battery: {battery} ModeCycleParam: {mode(cycles[battery])}\n") fo.write("\n") # print(f"Standard Dev RMSE: {stddevs_rmses}") # print(f"Average RMSE: {avgs_rmses}") # Close opend file fo.close() ###Output _____no_output_____
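The evaluation loops above compute MAE, RMSE and R² by hand. The same numbers can be obtained from scikit-learn's metric functions, which is shorter and less error-prone. A minimal sketch, assuming the `y_test`/`y_pred` arrays produced inside the loop (the notebook additionally divides its RMSE by 100 before storing it):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

mae = mean_absolute_error(y_test, y_pred)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))  # same value as the manual loop
r2 = r2_score(y_test, y_pred)
print(f"MAE: {mae:.3f}  RMSE: {rmse:.3f}  R2: {r2:.3f}")
```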
programming.ipynb
###Markdown Programming "A set of instructions that produce various kinds of output (Wikipedia)" ###Code import numpy as np import matplotlib.pyplot as plt plt.rcParams['text.usetex'] = True plt.rcParams['font.size'] = 15 plt.rcParams['font.family'] = "serif" N = 9 x = np.linspace(0, 6*np.pi, N) # Define a function using lambda stock = lambda A, amp, angle, phase: A * angle + amp * np.sin(angle + phase) mean_stock = (stock(.1, .2, x, 1.2)) np.random.seed(100) upper_stock = mean_stock + np.random.randint(N) * 0.02 lower_stock = mean_stock - np.random.randint(N) * 0.015 plt.figure(figsize=(9, 6)) plt.plot(x, mean_stock, color = 'darkorchid', label = r'$y = \gamma \sin(\theta + \phi_0)$') plt.fill_between(x, upper_stock, lower_stock, alpha = .1, color = 'darkorchid') plt.grid(alpha = .2) plt.xlabel(r'$\theta$ (rad)', labelpad = 15) plt.ylabel('y', labelpad = 15) plt.legend(); ###Output _____no_output_____
notebooks/13-raster-processing.ipynb
###Markdown Raster operations and raster-vector tools> *DS Python for GIS and Geoscience* > *October, 2021*>> *© 2021, Joris Van den Bossche and Stijn Van Hoey. Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- In the previous notebooks, we focused either on vector data or raster data. Often you encounter both types of data and want to combine them. In this notebook, we show *some* examples of typical raster/vector interactions. ###Code import pandas as pd import numpy as np import geopandas import rasterio import xarray as xr import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown `rioxarray`: xarray extension based on rasterio In the previous notebooks, we already used `rasterio` (https://rasterio.readthedocs.io/en/latest/) to read raster files such as GeoTIFFs (through the `xarray.open_rasterio()` function). Rasterio provides support for reading and writing geospatial raster data as numpy N-D arrays, mainly through bindings to the GDAL library. In addition, rasterio provides a Python API to perform some GIS raster operations (clip, mask, warp, merge, transformation,...) and can be used to only load a subset of a large dataset into memory. However, the main complexity in using `rasterio`, is that the spatial information is decoupled from the data itself (i.e. the numpy array). This means that you need to keep track and organize the extent and metadata throughout the operations (e.g. the "transform") and you need to keep track of what each dimension represents (y-first, as arrays are organized along rows first). Notebook [91_package_rasterio](./91_package_rasterio.ipynb) goes into more depth on the rasterio package itself. Enter `rioxarray` (https://corteva.github.io/rioxarray/stable/index.html), which extends xarray with geospatial functionalities powered by rasterio. ###Code import rioxarray data_file = "./data/herstappe/raster/2020-09-17_Sentinel_2_L1C_True_color.tiff" data = rioxarray.open_rasterio(data_file) data ###Output _____no_output_____ ###Markdown The `rioxarray.open_rasterio` function is similar to `xarray.open_rasterio`, but in addition adds a `spatial_ref` coordinate to keep track of the spatial reference information.Once `rioxarray` is imported, it provides a `.rio` accessor on the xarray.DataArray object, which gives access to some properties of the raster data: ###Code data.rio.crs data.rio.bounds() data.rio.resolution() data.rio.nodata data.rio.transform() ###Output _____no_output_____ ###Markdown Reprojecting rasters `rioxarray` gives access to a set of raster processing functions from rasterio/GDAL. One of those is to reproject (transform and resample) rasters, for example to reproject to different coordinate reference system, up/downsample to a different resolution, etc. In all those case, in the transformation of a source raster to a destination raster, pixel values need to be recalculated. There are different "resampling" methods this can be done: taking the nearest pixel value, calculating the average, a (non-)linear interpolation, etc.The functionality is available through the `reproject()` method. Let's start with reprojecting the Herstappe tiff to a different CRS: ###Code data.rio.crs data.rio.reproject("EPSG:31370").plot.imshow(figsize=(10,6)) ###Output _____no_output_____ ###Markdown The default resampling method is "nearest", which is often not a suitable method (especially for continuous data). 
We can change the method using the `rasterio.enums.Resampling` enumeration (see [docs](https://rasterio.readthedocs.io/en/latest/api/rasterio.enums.htmlrasterio.enums.Resampling) for a overview of all methods): ###Code from rasterio.enums import Resampling data.rio.reproject("EPSG:31370", resampling=Resampling.bilinear).plot.imshow(figsize=(10,6)) ###Output _____no_output_____ ###Markdown The method can also be used to downsample at the same time: ###Code data.rio.reproject(data.rio.crs, resolution=120, resampling=Resampling.cubic).plot.imshow(figsize=(10,6)) ###Output _____no_output_____ ###Markdown Extract the data you need In many applications, a specific research area is used. Extracting the data you need from a given raster data set by a vector (polygon) file is a common operation in GIS analysis. We use the clipping example to explain the typical workflow with rioxarray / rasterio.For our Herstappe example, the study area is available as vector data `./data/herstappe/vector/herstappe.geojson`: ###Code herstappe_vect = geopandas.read_file("./data/herstappe/vector/herstappe.geojson") herstappe_vect herstappe_vect.plot() herstappe_vect.crs ###Output _____no_output_____ ###Markdown Make sure both data sets are defined in the same CRS and extracting the geometry can be used as input for the masking: ###Code herstappe_vect = herstappe_vect.to_crs(epsg=3857) clipped = data.rio.clip(herstappe_vect.geometry) fig, (ax0, ax1) = plt.subplots(ncols=2, figsize=(10,4)) data.plot.imshow(ax=ax0) herstappe_vect.plot(ax=ax0, facecolor="none", edgecolor="red") clipped.plot.imshow(ax=ax1) herstappe_vect.plot(ax=ax1, facecolor="none", edgecolor="red") fig.tight_layout() ###Output _____no_output_____ ###Markdown The above uses the `rasterio` package (with the `mask` and `geometry_mask` / `rasterize` functionality) under the hood. This simplifies the operation compared to directly using `rasterio`.```python cfr. The Rasterio workflowfrom rasterio.mask import mask 1 - Open a data set using the context managerwith rasterio.open(data_file) as src: 2 - Read and transform the data set by clipping out_image, out_transform = mask(src, herstappe_vect.geometry, crop=True) 3 - Update the spatial metadata/profile of the data set herstappe_profile = src.profile herstappe_profile.update({"height": out_image.shape[1], "width": out_image.shape[2], "transform": out_transform}) 4 - Save the new data set with the updated metadata/profile with rasterio.open("./herstappe_masked.tiff", "w", **herstappe_profile) as dest: dest.write(out_image)```The [91_package_rasterio](./91_package_rasterio.ipynb) notebook explains this workflow in more detail.One important difference, though, is that the above `rasterio` workflow will not load the full raster into memory when only loading (clipping) a small part of it. This can also be achieved in `rioxarray` with the `from_disk` keyword. Convert vector to raster Load DEM raster and river vector data As example, we are using data from the Zwalm river area in Flanders. 
The digital elevation model (DEM) can be downloaded via the [governmental website](https://download.vlaanderen.be/Producten/Detail?id=936&title=Digitaal_Hoogtemodel_Vlaanderen_II_DSM_raster_5_m) ([download link](https://downloadagiv.blob.core.windows.net/dhm-vlaanderen-ii-dsm-raster-5m/DHMVIIDSMRAS5m_k30.zip), extracted in the `/data` directory for this example)/ ###Code dem_zwalm_file = "data/DHMVIIDSMRAS5m_k30/GeoTIFF/DHMVIIDSMRAS5m_k30.tif" ###Output _____no_output_____ ###Markdown _Make sure you have downloaded the data set ([download link](https://downloadagiv.blob.core.windows.net/dhm-vlaanderen-ii-dsm-raster-5m/DHMVIIDSMRAS5m_k30.zip)), saved it in the `./data` subfolder and unzipped the folder_ ###Code dem_zwalm = xr.open_rasterio(dem_zwalm_file).sel(band=1) img = dem_zwalm.plot.imshow( cmap="terrain", figsize=(10, 4), interpolation='antialiased') img.axes.set_aspect("equal") ###Output _____no_output_____ ###Markdown Next, we download the shapes of the rivers in the area through a WFS (Web Feature Service): ###Code import json import requests wfs_rivers = "https://geoservices.informatievlaanderen.be/overdrachtdiensten/VHAWaterlopen/wfs" params = dict(service='WFS', version='1.1.0', request='GetFeature', typeName='VHAWaterlopen:Wlas', outputFormat='json', cql_filter="(VHAZONENR=460)OR(VHAZONENR=461)", srs="31370") # Fetch data from WFS using requests r = requests.get(wfs_rivers, params=params) ###Output _____no_output_____ ###Markdown __Note__: A WFS is a standardized way to share vector GIS data sets on the internet, typically also used by web application, see ['A bit more about WFS'](a_bit_more_about_WFS) section for more info. And convert the output of the wfs call to a GeoDataFrame: ###Code # Create GeoDataFrame from geojson segments = geopandas.GeoDataFrame.from_features(json.loads(r.content), crs="epsg:31370") segments.head() segments.plot(figsize=(8, 7)) ###Output _____no_output_____ ###Markdown Clip raster with vectorThe catchment extent is much smaller than the DEM file, so clipping the data first will make the computation less heavy. Let's first download the catchment area of the Zwalm river from the Flemish government (using WFS again): ###Code import json import requests wfs_bekkens = "https://geoservices.informatievlaanderen.be/overdrachtdiensten/Watersystemen/wfs" params = dict(service='WFS', version='1.1.0', request='GetFeature', typeName='Watersystemen:WsDeelbek', outputFormat='json', cql_filter="DEELBEKNM='Zwalm'", srs="31370") # Fetch data from WFS using requests r = requests.get(wfs_bekkens, params=params) catchment = geopandas.GeoDataFrame.from_features(json.loads(r.content), crs="epsg:31370") catchment ###Output _____no_output_____ ###Markdown Save to a file for later reuse: ###Code # save to file catchment = catchment.to_crs('epsg:4326') # geojson is default 4326 catchment.to_file("./data/zwalmbekken.geojson", driver="GeoJSON") geopandas.read_file("./data/zwalmbekken.geojson").plot() ###Output _____no_output_____ ###Markdown 1. 
Using rioxarray (rasterio) As shown above, we can use rioxarray to clip the raster file: ###Code dem_zwalm = xr.open_rasterio(dem_zwalm_file).sel(band=1) dem_zwalm clipped = dem_zwalm.rio.clip(catchment.to_crs('epsg:31370').geometry) ###Output _____no_output_____ ###Markdown Using rioxarray's `to_raster()` method, we can also save the result to a new GeoTIFF file: ###Code clipped.rio.to_raster("./dem_masked_rio.tiff") ###Output _____no_output_____ ###Markdown This DEM raster file used -9999 as the NODATA value, and this is therefore also used for the clipped result: ###Code clipped.rio.nodata img = clipped.where(clipped != -9999).plot.imshow( cmap='terrain', figsize=(10, 6), interpolation='antialiased') img.axes.set_aspect("equal") ###Output _____no_output_____ ###Markdown With rioxarray, we can also convert nodata values to NaNs (and thus using float dtype) when loading the raster data: ###Code dem_zwalm2 = rioxarray.open_rasterio(dem_zwalm_file, masked=True).sel(band=1) dem_zwalm2.rio.nodata dem_zwalm2.rio.clip(catchment.to_crs('epsg:31370').geometry) ###Output _____no_output_____ ###Markdown If we want to avoid loading the full original raster data, the `from_disk` keyword can be used. ###Code dem_zwalm2.rio.clip(catchment.to_crs('epsg:31370').geometry, from_disk=True) ###Output _____no_output_____ ###Markdown 2. Using GDAL CLI If we have the raster and vector files on disk, [`gdal CLI`](https://gdal.org/programs/index.html) will be very fast to work with (note that GDAL automatically handles the CRS difference of the raster and vector). ###Code rm ./dem_masked_gdal.tiff !gdalwarp -cutline ./data/zwalmbekken.geojson -crop_to_cutline data/DHMVIIDSMRAS5m_k30/GeoTIFF/DHMVIIDSMRAS5m_k30.tif ./dem_masked_gdal.tiff clipped_gdal = rioxarray.open_rasterio("./dem_masked_gdal.tiff", masked=True).sel(band=1) img = clipped_gdal.plot.imshow( cmap="terrain", figsize=(10, 6), interpolation='antialiased') img.axes.set_aspect("equal") ###Output _____no_output_____ ###Markdown **TIP**: In the GIS world, also other libraries do provide a large set of functionalities as command line instructions with a `FILE IN` -> `RUN COMMAND` -> `FILE OUT` approach, with some of them providing a Python interface as well. Some important once are: - The [`gdal` library](https://gdal.org/programs/index.htmlraster-programs) is the open source Swiss Army knife for raster and vector geospatial data handling. - The [SAGA GIS](http://www.saga-gis.org/en/index.html) has a [huge set](http://www.saga-gis.org/saga_tool_doc/8.0.0/a2z.html) of CLI commands, going from flow accumulation to classification algorithms.- The [WhiteboxTools](https://www.whiteboxgeo.com/geospatial-software/) is another example of a library with a lot of functionalities, e.g. hydrological, agricultural and terrain analysis tools. Other important initiatives like [Grass](https://grasswiki.osgeo.org/wiki/GRASS-Wiki) and [PCRaster](https://pcraster.geo.uu.nl/) are worthwhile to check out. Most of these libraries of tools can also be used with QGIS. __NOTE:__ You can run a CLI command inside a Jupyter Notebook by prefixing it with the `!` character. Convert vector to raster To create a raster with the vector "burned in", we can use the `rasterio.features.rasterize` function. This expects a list of (shape, value) tuples, and an output image shape and transform. Here, we will create a new raster image with the same shape and extent as the DEM above. 
And we first take a buffer of the river lines: ###Code import rasterio.features segments_buffered = segments.geometry.buffer(100) img = rasterio.features.rasterize( segments_buffered, out_shape=clipped.shape, transform=clipped.rio.transform()) img fig, (ax0, ax1) = plt.subplots(1, 2) ax0.imshow(img*50) ax1.imshow(clipped.values - img*20, vmin=0, cmap="terrain") # just as an example fig.tight_layout() ###Output _____no_output_____ ###Markdown Let's practice!For these exercises, a set of raster and vector datasets for the region of Ghent is available. Throughout the exercises, the goal is to map preferential locations to live given a set of conditions (certain level above sea-level, quiet and green neighbourhood, ..).We start with the Digital Elevation Model (DEM) for Flanders. The data is available at https://overheid.vlaanderen.be/informatie-vlaanderen/producten-diensten/digitaal-hoogtemodel-dhmv. We downloaded the 25m resolution raster image, and provided a subset of this dataset as a zipped Tiff file in the course material. **EXERCISE**:* Read the DEM using `rioxarray`. The zip file is available at `data/gent/DHMVIIDTMRAS25m.zip`. You can either unzip the file, and use the path to the unzipped file, or prepend `zip://` to the path. * What is the CRS of this dataset?* The dataset has a third dimension with a single band. This doesn't work for plotting, so create a new `DataArray` by selecting the single band. Assign the result to a variable `dem`.* Make a quick plot of the dataset.Hints* Rasterio can directly read from a zip-file. If one wants to read the file `./data/gent/FILENAME.zip`, use `zip://./data/gent/FILENAME.zip` to read the zip file directly.* The `crs` is an attribute, not a function, so no `()` required.* Selecting in xarray is done with `sel`* We will improve the plot in the following exercises, but as a quick solution, we already learnt about the `robust=True` plot option of xarray. * Just pick a color map that you like. ###Code # %load _solutions/13-raster-processing1.py # %load _solutions/13-raster-processing2.py # %load _solutions/13-raster-processing3.py # %load _solutions/13-raster-processing4.py ###Output _____no_output_____ ###Markdown **EXERCISE**:The dataset uses a large negative value to denote the "nodata" value (in this case meaning "outside of Flanders"). * Check the value that is used as "nodata" value.* Repeat the plot from the previous exercise, but now set a fixed minimum value of 0 for the colorbar, to ignore the negative "nodata" in the color scheme.* Replace the "nodata" value with `np.nan` using the `where()` method. Hints* The `nodata` attribute is provided by the rioxarray package. Rioxarray loads this information from the geotiff file metadata and makes it available as `.rio.nodata`. * The `.rio.nodata` is a class attribute, not a function, so no `()` required.* `vmin` and `vmax` define the colormap limits.* `where` expects a condition (i.e. boolean values), e.g. `... != dem.rio.nodata`.
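One possible approach, sketched here before the provided solutions are loaded (it assumes the `dem` variable from the previous exercise; the `dem_nan` name is just illustrative and the solution files may differ in the details): ###Code
# A possible sketch (not necessarily identical to the loaded solution files)
print(dem.rio.nodata)  # the "nodata" marker, a large negative value

# fix the colorbar minimum at 0 so the negative "nodata" does not dominate the color scheme
dem.plot.imshow(cmap="terrain", vmin=0)

# replace the "nodata" value by NaN using where()
dem_nan = dem.where(dem != dem.rio.nodata)
dem_nan.plot.imshow(cmap="terrain", robust=True)
###Output
_____no_output_____
###Markdown
The cells below load the corresponding solution files.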
###Code # %load _solutions/13-raster-processing5.py # %load _solutions/13-raster-processing6.py # %load _solutions/13-raster-processing7.py # %load _solutions/13-raster-processing8.py ###Output _____no_output_____ ###Markdown Alternatively to masking the nodata value yourself, you can do this directly when loading the data as well, using the `masked=True` keyword of `open_rasterio()`: ###Code dem_masked = rioxarray.open_rasterio("zip://./data/gent/DHMVIIDTMRAS25m.zip", masked=True).sel(band=1) dem_masked.plot.imshow(robust=True, cmap="terrain") ###Output _____no_output_____ ###Markdown **EXERCISE**:We want to limit our search for locations to the surroundings of the centre of Ghent.* Create a `Point` object for the centre of Ghent. Latitude/longitude coordinates for the Korenmarkt are: 51.05393, 3.72174* We need our point in the same Coordinate Reference System as the DEM raster (i.e. EPSG:31370, or Belgian Lambert 72). Use GeoPandas to reproject the point: * Create a GeoSeries with this single point and specify its CRS with the `crs` keyword. * Reproject this series with the `to_crs` method. Assign the resulting GeoSeries to a variable `gent_centre_31370`.* Calculcate a buffer of 10km radius around the point.* Get the bounding box coordinates of this buffer. Assign this to `gent_bounds`. Hints* Remember the introduction on geospatial data and the shapely objects, e.g. `shapely.geometry.Point`?* Use `geopandas.GeoSeries` to create a new GeoSeries and add the `crs` parameter. The lat/lon of the Kornmarkt are provided as EPSG:4326. * In EPSG:31370, the unit is meter, so make sure to use meter to define the buffer size.* `.total_bounds` is a class attribute. ###Code # %load _solutions/13-raster-processing9.py # %load _solutions/13-raster-processing10.py # %load _solutions/13-raster-processing11.py # %load _solutions/13-raster-processing12.py ###Output _____no_output_____ ###Markdown **EXERCISE**:With this bounding box, we can now clip a subset of the DEM raster for the area of interest. * Clip the `dem` raster layer. To clip with a bounding box instead of a geometry, you can use the `rio.clip_box()` method instead of `rio.clip()`.* Make a plot. Use the "terrain" color map, and set the bounds of the color scale to 0 - 30.Hints* The `gent_bounds` is an array of 4 elements, whereas `clip_box` requires these as seperate input parameters... Did you know that in Python you can unpack these 4 values with the `*`: `*gent_bounds` will unpack to 4 individual input. ###Code # %load _solutions/13-raster-processing13.py # %load _solutions/13-raster-processing14.py ###Output _____no_output_____ ###Markdown The CORINE Land Cover (https://land.copernicus.eu/pan-european/corine-land-cover) is a program by the European Environment Agency (EEA) to provide an inventory of land cover in 44 classes of the European Union. The data is provided in both raster as vector format and with a resolution of 100m.The data for the whole of Europe can be downloaded from the website (latest version: https://land.copernicus.eu/pan-european/corine-land-cover/clc2018?tab=download). This is however a large dataset, so we downloaded the raster file and cropped it to cover Flanders, and this subset is included in the repo as `data/CLC2018_V2020_20u1_flanders.tif` (the code to do this cropping can be see at [data/preprocess_data.ipynbCORINE-Land-Cover](data/preprocess_data.ipynbCORINE-Land-Cover)). **EXERCISE**:* Read the land use data provided as a tif (`data/CLC2018_V2020_20u1_flanders.tif`). * Make a quick plot. 
The raster is using a negative value as "nodata"; consider using the `robust=True` option.* What is the resolution of this raster?* What is the CRS?Hints* `rio.resolution()` is a method, so it requires the `()`.* `rio.crs` is an attribute, so it does not require the `()`. ###Code # %load _solutions/13-raster-processing15.py # %load _solutions/13-raster-processing16.py # %load _solutions/13-raster-processing17.py # %load _solutions/13-raster-processing18.py ###Output _____no_output_____ ###Markdown **EXERCISE**:The land use dataset is a European dataset and uses a Europe-wide projected CRS (https://epsg.io/3035).* Reproject the land use raster to the same CRS as the DEM raster (EPSG:31370), and plot the result.Hints* For the sake of the exercise, pick any resampling algorithm or just the default option. ###Code # %load _solutions/13-raster-processing19.py # %load _solutions/13-raster-processing20.py ###Output _____no_output_____ ###Markdown **EXERCISE**:In addition to reprojecting to a new CRS, we also want to upsample the land use dataset to the same resolution as the DEM raster, and to use the exact same grid layout, so we can compare and combine those rasters pixel-by-pixel.Such a reprojection can be done with the `reproject()` method by providing the target geospatial "transform" (which has the information for the bounds and resolution) and shape for the result. `rioxarray` provides a short-cut for this operation with the `reproject_match()` method, to reproject one raster to the CRS, extent and resolution of another raster. * Reproject the land use raster to the same CRS and extent as the DEM subset for Ghent (`dem_gent`). Call the result `land_use_gent`.* Make a plot of the result.Hints* Check the help of the `.rio.reproject_match` method (SHIFT + TAB) to know which input you need. For the sake of the exercise, pick any resampling algorithm or just the default (nearest) option.* The data calling the method is being transformed, and the input parameter is the target to match. ###Code # %load _solutions/13-raster-processing21.py # %load _solutions/13-raster-processing22.py ###Output _____no_output_____ ###Markdown The land use dataset is a raster with discrete values (i.e. different land use classes). The [CurieuzeNeuzen case study](case-curieuzeneuzen-air-quality.ipynbCombining-with-Land-Use-data) goes into more depth on those values, but for this exercise it is sufficient to know that values 1 and 2 are the Continuous and Discontinuous urban fabric (residential areas). **EXERCISE**:Let's find the preferential locations to live, assuming we want to be future-proof and live at least 10m above sea level in a residential area.* Create a new array denoting the residential areas (where `land_use_gent` is equal to 1 or 2). Call this `land_use_residential`, and make a quick plot.* Plot those locations that are 10m above sea-level.* Combine the residential areas and areas > 10m in a single array called `suitable_locations`, and plot the result. Hints* To select for multiple options, one can either combine multiple conditions using `|` (or) or use the `isin([...,...])` option, both will do.* The output of a condition is a Boolean map that can be plotted just like other maps, e.g. `(dem_gent > 10).plot.imshow()`.* For the `suitable_locations`, both boolean conditions need to be True, so combine them with either `&` or just multiply them with `*`.
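One possible sketch before loading the solutions (it assumes the `land_use_gent` and `dem_gent` variables from the previous exercises; the solution files may differ in the details): ###Code
# A possible sketch (not necessarily identical to the loaded solution files)
land_use_residential = land_use_gent.isin([1, 2])  # continuous + discontinuous urban fabric
land_use_residential.plot.imshow()

(dem_gent > 10).plot.imshow()  # at least 10 m above sea level

# both conditions need to hold, so multiply (or combine with &) the boolean maps
suitable_locations = land_use_residential * (dem_gent > 10)
suitable_locations.plot.imshow()
###Output
_____no_output_____
###Markdown
The cells below load the corresponding solution files.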
###Code # %load _solutions/13-raster-processing23.py # %load _solutions/13-raster-processing24.py # %load _solutions/13-raster-processing25.py # %load _solutions/13-raster-processing26.py # %load _solutions/13-raster-processing27.py # %load _solutions/13-raster-processing28.py ###Output _____no_output_____ ###Markdown **EXERCISE**:In addition to the previous conditions, assume we also don't want to live close to major roads.We downloaded the road segments open data from Ghent (https://data.stad.gent/explore/dataset/wegsegmenten-gent/) as a GeoJSON file, and provided this in the course materials: `/data/gent/vector/wegsegmenten-gent.geojson.zip` * Read the GeoJSON road segments file into a variable `roads` and check the first few rows.* The column "frc_omschrijving" contains a description of the type of road for each segment. Get an overview of the available segments and types by doing a "value counts" of this column.Hints* GeoPandas does NOT need the `zip://...` to read in zip files.* The first few rows are also the `head` of a data set.* The `value_counts` provides the number of records for each of the different values in a column. ###Code # %load _solutions/13-raster-processing29.py # %load _solutions/13-raster-processing30.py # %load _solutions/13-raster-processing31.py ###Output _____no_output_____ ###Markdown **EXERCISE**:We are interested in the big roads, as these are the ones we want to avoid: "Motorway or Freeway", "Major Road" and "Other Major Road".* Filter the `roads` table based on the provided list of road types: select those rows where the "frc_omschrijving" column is equal to one of those values, and call this `roads_subset`.* Make a quick plot of this subset and use the "frc_omschrijving" column to color the lines.Hints* Selecting multiple options at the same time is most convenient with the `isin()` method.* Use the GeoPandas `.plot` method and pass the `frc_omschrijving` column to the `column` parameter. ###Code road_types = [ "Motorway, Freeway, or Other Major Road", "a Major Road Less Important than a Motorway", "Other Major Road", ] # %load _solutions/13-raster-processing32.py # %load _solutions/13-raster-processing33.py ###Output _____no_output_____ ###Markdown **EXERCISE**:Before we convert the vector data to a raster, we want to buffer the roads. We will use a larger buffer radius for the larger roads.* Using the defined `buffer_per_roadtype` dictionary, create a new Series by replacing the values in the "frc_omschrijving" column with the matching buffer radius.* Convert the `roads_subset` GeoDataFrame to CRS `EPSG:31370`, and create buffered lines (polygons) with the calculated buffer radius distances. Call the result `roads_buffer`. Hints* Use the `replace` method to replace the data using the provided mapping `buffer_per_roadtype`.* The conversion to EPSG:31370 is important to be able to work with the meters to define the buffer size.* The `buffer` method can take a single value to apply to all values, but also a Series of values, with a buffer size defined for each element. ###Code buffer_per_roadtype = { "Motorway, Freeway, or Other Major Road": 750, "a Major Road Less Important than a Motorway": 500, "Other Major Road": 150, } # %load _solutions/13-raster-processing34.py # %load _solutions/13-raster-processing35.py ###Output _____no_output_____ ###Markdown **EXERCISE**:Convert the buffered road segments to a raster: * Use `rasterio.features.rasterize()` to convert the `roads_buffer` GeoDataFrame to a raster: * Pass the geometry column as the first argument.
* Pass the shape and transform of the `dem_gent` to specify the desired spatial extent and resolution of the output raster.* Invert the values of the raster by doing `1 - arr`, and plot the array with `plt.imshow(..)`.* Recalculate the `suitable_locations` variable, using 1/ land_use_residential, 2/ dem > 10 and 3/ outside the road buffers. Hints* Access the geometry column using the `.geometry` attribute.* `shape` is also an attribute.* `.rio.transform()` is a method, so it requires the `()`.* Previously, suitable locations were `land_use_residential * (dem_gent > 10)`. Combine this with the `(1 - roads_buffer_arr)` output. ###Code import rasterio.features # %load _solutions/13-raster-processing36.py # %load _solutions/13-raster-processing37.py # %load _solutions/13-raster-processing38.py # %load _solutions/13-raster-processing39.py # %load _solutions/13-raster-processing40.py ###Output _____no_output_____ ###Markdown **EXERCISE**:Make a plot with a background map of the selected locations. * Plot the provided `gent` GeoDataFrame (a single row table with the area of the Ghent municipality). Use a low "alpha" to give it a light color.* Add a background map using contextily.* Plot the `suitable_locations` raster on top of this figure: first mask the array to select only those values larger than zero (so the other values become NaN, and are not plotted). Then plot the result, adding it to the existing figure using the `ax` keyword.Hints* The `fig, ax = plt.subplots(figsize=(15, 15))` is a convenient shortcut to prepare a Matplotlib Figure and Axes. * Make sure to define the `crs="EPSG:31370"` for contextily.* `where(...)` is a powerful way to exclude data as it - by default - adds NaN values for pixels where the condition is not True. ###Code import contextily gent = geopandas.read_file("data/gent/vector/gent.geojson") # %load _solutions/13-raster-processing41.py ###Output _____no_output_____ ###Markdown Advanced exercises **EXERCISE**:We downloaded the data about urban green areas in Ghent (https://data.stad.gent/explore/dataset/parken-gent).* Read in the data at `data/gent/vector/parken-gent.geojson` into a variable `green`.* Check the content (first rows, quick plot).* Convert this vector layer to a raster using the spatial extent and resolution of `dem_gent` as the target raster.* The `rasterio.features.rasterize` results in a numpy array. Convert this to a DataArray using the `xr.DataArray(..)` constructor, specifying the coordinates of `dem_gent` (`dem_gent.coords`) for the coordinates of the new array. ###Code # %load _solutions/13-raster-processing42.py # %load _solutions/13-raster-processing43.py # %load _solutions/13-raster-processing44.py # %load _solutions/13-raster-processing45.py # %load _solutions/13-raster-processing46.py ###Output _____no_output_____ ###Markdown **EXERCISE**:For the urban green areas, we want to calculate a statistic for a neighbourhood around each pixel ("focal" statistics). This can be expressed as a convolution with a defined kernel. The [xarray-spatial](https://xarray-spatial.org/index.html) package includes functionality for such focal statistics and convolutions.* Use the `focal.focal_stats()` function from xarray-spatial to calculate the sum of green area in a neighbourhood of 500m around each point. Check the help of this function to see which arguments to specify.* Make a plot of the resulting `green_area` array.
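As a rough sketch of one possible approach (assuming `green_arr` is the rasterized parks `DataArray` from the previous exercise; the exact layout of the `focal_stats()` result can differ between xarray-spatial versions, so check its help as the exercise suggests): ###Code
# A possible sketch (not necessarily identical to the loaded solution files)
from xrspatial import focal, convolution

cell_x, cell_y = convolution.calc_cellsize(green_arr)
kernel_500m = convolution.circle_kernel(cell_x, cell_y, 500)

# focal_stats returns one layer per requested statistic; squeeze() drops a length-1 dimension
green_area = focal.focal_stats(green_arr, kernel_500m, stats_funcs=["sum"]).squeeze()
green_area.plot.imshow(figsize=(10, 6))
###Output
_____no_output_____
###Markdown
The next cell sets up the same 500m circular kernel, which is the starting point for the solution files.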
###Code from xrspatial import focal, convolution x, y = convolution.calc_cellsize(green_arr) kernel = convolution.circle_kernel(x, y, 500) # %load _solutions/13-raster-processing47.py # %load _solutions/13-raster-processing48.py ###Output _____no_output_____ ###Markdown The `scipy` package also provides optimized convolution algorithms. In case of the "sum" statistic, this is equivalent: ###Code from scipy import signal # %load _solutions/13-raster-processing49.py # %load _solutions/13-raster-processing50.py ###Output _____no_output_____ ###Markdown **EXERCISE**:Make a plot with a background map of the selected locations, i.e. land_use_residential, dem > 10, outside road buffers and 'sufficient'as green area. Define sufficient green area as the `green_area` pixels where the sum of the convolution (previous exercise) is larger as 10 (keep those values and convert pixels with values lower than 10 to 0 values). Multiply the different conditions/categories, so the green area score is included in the final result. ###Code # %load _solutions/13-raster-processing51.py # %load _solutions/13-raster-processing52.py import contextily gent = geopandas.read_file("data/gent/vector/gent.geojson") fig, ax = plt.subplots(figsize=(10, 10)) gent.to_crs("EPSG:31370").plot(ax=ax, alpha=0.1) ax.set(ylim=(190_000, 200_000), xlim=(100_000, 110_000)) contextily.add_basemap(ax, crs="EPSG:31370") suitable_locations.where(suitable_locations>0).plot.imshow(ax=ax, alpha=0.5, add_colorbar=False) ax.set_aspect("equal") ###Output _____no_output_____ ###Markdown Extracting values from rasters based on vector dataThe **rasterstats** package provides methods to calculate summary statistics of geospatial raster datasets based on vector geometries (https://github.com/perrygeo/python-rasterstats) To illustrate this, we are reading a raster file with elevation data of the full world (the file contains a single band for the elevation, save the file in the `data` subdirectory; [download link](https://www.eea.europa.eu/data-and-maps/data/world-digital-elevation-model-etopo5/zipped-dem-geotiff-raster-geographic-tag-image-file-format-raster-data/zipped-dem-geotiff-raster-geographic-tag-image-file-format-raster-data/at_download/file)): ###Code countries = geopandas.read_file("./data/ne_110m_admin_0_countries.zip") cities = geopandas.read_file("./data/ne_110m_populated_places.zip") dem_geotiff = "data/dem_geotiff/DEM_geotiff/alwdgg.tif" img = xr.open_rasterio(dem_geotiff).sel(band=1).plot.imshow(cmap="terrain", figsize=(10, 4), ) img.axes.set_aspect("equal") ###Output _____no_output_____ ###Markdown Given this raster of the elevation, we might want to know the elevation at a certain location or for each country.For the countries example, we want to extract the pixel values that fall within a country polygon, and calculate a statistic for it, such as the mean or the maximum.Such functionality to extract information from a raster for given vector data is provided by the rasterstats package. ###Code import rasterstats ###Output _____no_output_____ ###Markdown For extracting the pixel values for polygons, we use the `zonal_stats` function, passing it the GeoSeries, the path to the raster file, and the method to compute the statistics. 
###Code result = rasterstats.zonal_stats(countries.geometry, dem_geotiff, stats=['min', 'mean', 'max'], nodata=-999) ###Output _____no_output_____ ###Markdown The results can be assigned to new columns: ###Code countries[['min', 'max', 'mean']] = pd.DataFrame(result) countries.head() ###Output _____no_output_____ ###Markdown And then we can sort by the average elevation of the country: ###Code countries.sort_values('mean', ascending=False).head() ###Output _____no_output_____ ###Markdown For points, a similar function called `point_query` can be used (specifying the interpolation method): ###Code cities["elevation"] = rasterstats.point_query(cities.geometry, dem_geotiff, interpolate='bilinear', nodata=-999) cities.sort_values(by="elevation", ascending=False).head() ###Output _____no_output_____ ###Markdown ----------- A bit more about WFS> The Web Feature Service (WFS) represents a change in the way geographic information is created, modified and exchanged on the Internet. Rather than sharing geographic information at the file level using File Transfer Protocol (FTP), for example, the WFS offers direct fine-grained...(https://www.ogc.org/standards/wfs)In brief, the WFS is the specification to __access and download vector datasets__.To access WFS data, you need the following information:- URL of the service, e.g. `https://geoservices.informatievlaanderen.be/overdrachtdiensten/VHAWaterlopen/wfs`. Looking for these URLS, check [WFS page of Michel Stuyts](https://wfs.michelstuyts.be/?lang=en)- The available projections and layers, also check [WFS page of Michel Stuyts](https://wfs.michelstuyts.be/?lang=en) or start looking into the `GetCapabilities`, e.g. [vha waterlopen](https://geoservices.informatievlaanderen.be/overdrachtdiensten/VHAWaterlopen/wfs?REQUEST=GetCapabilities&SERVICE=WFS)Instead of downloading the entire data set, filtering the request itself (only downloading what you need) is a good idea, using the `cql_filter` filter. Finding out these is sometimes a bit of hazzle... E.g. quickly [preview the data in QGIS](https://docs.qgis.org/3.10/en/docs/training_manual/online_resources/wfs.html?highlight=wfs).You can also use the [`OWSLib` library](https://geopython.github.io/OWSLib/wfs). But as WFS is a webservice, the `requests` package will be sufficient for simple queries. As an example - municipalities in Belgium, see https://wfs.michelstuyts.be/service.php?id=140&lang=en, _WFS Voorlopig referentiebestand gemeentegrenzen 2019_- URL of the service: https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG2019/wfs- Available projections: EPSG:4258, EPSG:3812,...- Available layers: VRBG2019:Refgem:, VRBG2019:Refarr:,...- Column `Naam` contains the municipatility, e.g. `Gent` ###Code wfs_municipality = "https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG2019/wfs" params = dict(service='WFS', version='1.1.0', request='GetFeature', typeName='VRBG2019:Refgem', outputFormat='json', cql_filter="NAAM='Gent'", srs="31370") # Fetch data from WFS using requests r = requests.get(wfs_municipality, params=params) gent = geopandas.GeoDataFrame.from_features(json.loads(r.content), crs="epsg:31370") gent.plot() ###Output _____no_output_____ ###Markdown Cloud: only download what you needRasterio/rioxarray only reads the data from disk that is requested to overcome loading entire data sets into memory. The same applies to downloading data, overcoming entire downloads when only a fraction is required (when the online resource supports this). 
An example is https://zenodo.org/record/2654620, which is available as [Cloud Optimized Geotiff (COG)](https://www.cogeo.org/). Also cloud providers (AWS, google,...) do support COG files, e.g. [Landstat images](https://docs.opendata.aws/landsat-pds/readme.html).These files are typically very large to download, whereas we might only need a small subset of the data. COG files support downloading a subset of the data you need using a masking approach.Let's use the Averbode nature reserve data as an example, available at the URL: http://s3-eu-west-1.amazonaws.com/lw-remote-sensing/cogeo/20160401_ABH_1_Ortho.tif ###Code averbode_cog_rgb = 'http://s3-eu-west-1.amazonaws.com/lw-remote-sensing/cogeo/20160401_ABH_1_Ortho.tif' ###Output _____no_output_____ ###Markdown Check the metadata, without downloading the data itself: ###Code averbode_data = rioxarray.open_rasterio(averbode_cog_rgb) averbode_data ###Output _____no_output_____ ###Markdown Downloading the entire data set would be 37645*35405\*4 pixels of 1 byte, so more or less 5.3 GByte ###Code 37645*35405*4 / 1e9 # Gb averbode_data.size / 1e9 # Gb ###Output _____no_output_____ ###Markdown Assume that we have a study area which is much smaller than the total extent of the available image: ###Code left, bottom, right, top = averbode_data.rio.bounds() averbode_study_area = geopandas.read_file("./data/averbode/study_area.geojson") ax = averbode_study_area.plot(); ax.set_xlim(left, right); ax.set_ylim(bottom, top); ###Output _____no_output_____ ###Markdown In the case of COG data, the data can sometimes be requested on different resolution levels when stored as such. So, to get a very broad overview of the data, we can request the coarsest resolution by resampling and download the data at the resampled resolution: ###Code with rasterio.open(averbode_cog_rgb) as src: # check available overviews for band 1 print(f"Available resolutions are {src.overviews(1)}") averbode_64 = rioxarray.open_rasterio(averbode_cog_rgb, overview_level=5) averbode_64.size / 1e6 # Mb averbode_64.rio.resolution() ###Output _____no_output_____ ###Markdown Compare the thumbnail version of the data with our study area: ###Code fig, ax = plt.subplots() averbode_64.sel(band=[1, 2, 3]).plot.imshow(ax=ax) averbode_study_area.plot(ax=ax, color='None', edgecolor='red', linewidth=2); ###Output _____no_output_____ ###Markdown Downloading the entire data file would be overkill. Instead, we only want to download the data of the study area. This can be done with the `clip()` method using the `from_disk` option. The resulting data set will still be around 100MB and will take a bit of time to download, but this is only a fraction of the original data file: ###Code %%time # Only run this cell when sufficient band width ;-) averbode_subset = averbode_data.rio.clip(averbode_study_area.geometry, from_disk=True) averbode_subset.size / 1e6 # Mb averbode_subset.sel(band=[1, 2, 3]).plot.imshow(figsize=(10, 10)) ###Output _____no_output_____
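To avoid repeating the (slow) download, the clipped subset could also be cached locally for later reuse, mirroring the "Save to a file for later reuse" step earlier in this notebook (the output file name below is just an example): ###Code
# optional: cache the downloaded subset locally so the clip does not need to be repeated
averbode_subset.rio.to_raster("./data/averbode/averbode_subset.tif")
###Output
_____no_output_____
###Markdown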
site/en/r1/tutorials/eager/custom_training_walkthrough.ipynb
###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configuredto run in TF2's [compatbility mode](https://www.tensorflow.org/guide/migrate)but will run in TF1 as well. To use TF1 in Colab, use the[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)magic. This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code import os import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. 
Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. ###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peak at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). 
###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter. ###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. 
Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. 
###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. 
We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). ###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. 
An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. ###Code ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tf.keras.metrics.Mean() epoch_accuracy = tf.keras.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy: Example features Label Model prediction 5.93.04.31.511 6.93.15.42.122 5.13.31.70.500 6.0 3.4 4.5 1.6 12 5.52.54.01.311 Figure 4. An Iris classifier that is 80% accurate.&nbsp; Setup the test datasetEvaluating the model is similar to training the model. 
The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`. Download the CSV text file and parse that values, then give it a little shuffle: ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.data.experimental.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tf.keras.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [Tensorlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code from __future__ import absolute_import, division, print_function, unicode_literals import os import matplotlib.pyplot as plt import tensorflow as tf tf.enable_eager_execution() print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. 
Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. ###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peek at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with a string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter.
###Code batch_size = 32 train_dataset = tf.contrib.data.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
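One way to build that intuition is simply to try a few shapes and compare them. As a sketch (the layer widths below are arbitrary illustrative choices, not part of the original tutorial), a wider and deeper variant of the same classifier might look like this:
###Code
# A hypothetical, slightly deeper variant for experimentation.
# The widths (32, 32, 16) are arbitrary; the input is still the 4 Iris
# features and the output is still 3 class logits.
experimental_model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation=tf.nn.relu, input_shape=(4,)),
    tf.keras.layers.Dense(32, activation=tf.nn.relu),
    tf.keras.layers.Dense(16, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)
])
###Output _____no_output_____ ###Markdown Extra layers and neurons add capacity, but they also add parameters that the 120 training examples have to pin down.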
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
###Code ## Note: Rerunning this cell uses the same model variables from tensorflow import contrib tfe = contrib.eager # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tfe.metrics.Mean() epoch_accuracy = tfe.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://www.tensorflow.org/r1/guide/summaries_and_tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy: Example features | Label | Model prediction: (5.9, 3.0, 4.3, 1.5) | 1 | 1; (6.9, 3.1, 5.4, 2.1) | 2 | 2; (5.1, 3.3, 1.7, 0.5) | 0 | 0; (6.0, 3.4, 4.5, 1.6) | 1 | 2; (5.5, 2.5, 4.0, 1.3) | 1 | 1. Figure 4. An Iris classifier that is 80% accurate.&nbsp; Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`.
Download the CSV text file and parse the values. Unlike the training data, the test data isn't shuffled (`shuffle=False`), since the order of evaluation examples doesn't matter: ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.contrib.data.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tfe.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data.
TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code from __future__ import absolute_import, division, print_function, unicode_literals import os import matplotlib.pyplot as plt try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peek at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with a string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter.
###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
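One concrete input to that experimentation is the number of trainable parameters a given shape implies. A small sketch, assuming the `model` defined in the cell above:
###Code
# Inspect the shape we just picked: a layer-by-layer summary plus a manual count.
# For these Dense layers: (4*10 + 10) + (10*10 + 10) + (10*3 + 3) = 193 parameters.
model.summary()

total_params = sum(v.shape.num_elements() for v in model.trainable_variables)
print("Trainable parameters: {}".format(total_params))
###Output _____no_output_____ ###Markdown With only 120 training examples to fit, a compact model like this is a sensible starting point.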
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
###Code ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tf.keras.metrics.Mean() epoch_accuracy = tf.keras.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy: Example features | Label | Model prediction: (5.9, 3.0, 4.3, 1.5) | 1 | 1; (6.9, 3.1, 5.4, 2.1) | 2 | 2; (5.1, 3.3, 1.7, 0.5) | 0 | 0; (6.0, 3.4, 4.5, 1.6) | 1 | 2; (5.5, 2.5, 4.0, 1.3) | 1 | 1. Figure 4. An Iris classifier that is 80% accurate.&nbsp; Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`.
Download the CSV text file and parse the values. Unlike the training data, the test data isn't shuffled (`shuffle=False`), since the order of evaluation examples doesn't matter: ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.data.experimental.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tf.keras.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This guide uses machine learning to *categorize* Iris flowers by species.
It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code import os import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peek at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with a string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter.
###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how far off a model's predictions are from the desired label; in other words, how badly the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function, which takes the model's class logits and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager).
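Before wrapping the tape into a helper for the model, a minimal standalone sketch may make its behavior concrete; the toy tensor `t` below is purely illustrative and not part of the Iris pipeline. ###Code
# Minimal illustration of tf.GradientTape in eager mode (illustrative only).
t = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(t)   # constants must be watched explicitly; tf.Variable objects are watched automatically
    z = t * t       # the tape records this operation
print(tape.gradient(z, t))  # dz/dt = 2t = 6.0
###Output _____no_output_____ ###Markdown The same pattern, applied to the model's loss and trainable variables, gives the gradient helper used for training: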
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
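For intuition about what `apply_gradients` does under the hood, plain SGD simply subtracts the scaled gradient from each variable. Here is a toy sketch of a single hand-rolled update; the variable `w` and its loss are invented for illustration and are not part of the Iris pipeline. ###Code
# Toy example of one SGD step (illustrative only).
w = tf.Variable(2.0)
with tf.GradientTape() as tape:
    loss_w = (w - 5.0) ** 2       # toy loss whose minimum is at w = 5
grad_w = tape.gradient(loss_w, w)
w.assign_sub(0.01 * grad_w)       # w <- w - learning_rate * d(loss)/dw
print(w.numpy())                  # 2.06: nudged from 2.0 toward 5.0
###Output _____no_output_____ ###Markdown The loop below applies the same kind of update through the optimizer, batch by batch, for `num_epochs` passes over the training data.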
###Code ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tf.keras.metrics.Mean() epoch_accuracy = tf.keras.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:

| Example features | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An Iris classifier that is 80% accurate.&nbsp; Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`.
Download the CSV text file and parse the values. Unlike the training data, the test examples are not shuffled (`shuffle=False`), since ordering doesn't affect evaluation: ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.data.experimental.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tf.keras.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data.
TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code from __future__ import absolute_import, division, print_function, unicode_literals import os import matplotlib.pyplot as plt import tensorflow as tf tf.enable_eager_execution() print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peak at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter. 
###Code batch_size = 32 train_dataset = tf.contrib.data.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
###Code ## Note: Rerunning this cell uses the same model variables from tensorflow import contrib tfe = contrib.eager # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tfe.metrics.Mean() epoch_accuracy = tfe.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy: Example features Label Model prediction 5.93.04.31.511 6.93.15.42.122 5.13.31.70.500 6.0 3.4 4.5 1.6 12 5.52.54.01.311 Figure 4. An Iris classifier that is 80% accurate.&nbsp; Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`. 
Download the CSV text file and parse that values, then give it a little shuffle: ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.contrib.data.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tfe.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. 
TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code from __future__ import absolute_import, division, print_function, unicode_literals import os import matplotlib.pyplot as plt try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peak at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter. 
###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
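One quick way to ground that experimentation is to print the model's layer summary — a minimal sketch that only assumes the `model` defined in the cell above: ###Code
# Show each layer's output shape and parameter count for the model above.
model.summary()
###Output _____no_output_____ ###Markdown The parameter counts reported by the summary make the next rule of thumb concrete: wider or deeper layers mean more weights to fit, and therefore more data needed to fit them well.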
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
###Code ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tf.keras.metrics.Mean() epoch_accuracy = tf.keras.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module. Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectiveness Now that the model is trained, we can get some statistics on its performance. *Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:

| Example features | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An Iris classifier that is 80% accurate. Setup the test dataset Evaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model. The setup for the test `Dataset` is similar to the setup for the training `Dataset`.
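Before setting it up, it is worth confirming the arithmetic in Figure 4: accuracy is simply the fraction of label/prediction pairs that match. A minimal sketch, with the five rows copied by hand from the figure into hypothetical `labels_fig4` and `predictions_fig4` lists: ###Code
# The five Figure 4 examples: 4 of the 5 predictions match the labels.
labels_fig4 = [1, 2, 0, 1, 1]
predictions_fig4 = [1, 2, 0, 2, 1]
matches = sum(l == p for l, p in zip(labels_fig4, predictions_fig4))
print(matches / len(labels_fig4))  # 0.8, i.e. 80% accuracy
###Output _____no_output_____ ###Markdown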
Download the CSV text file and parse that values, then give it a little shuffle: ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.data.experimental.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tf.keras.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configuredto run in TF2's [compatbility mode](https://www.tensorflow.org/guide/migrate)but will run in TF1 as well. To use TF1 in Colab, use the[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)magic. This guide uses machine learning to *categorize* Iris flowers by species. 
It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code import os import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peak at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter. 
###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
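Before moving on to models, here is a minimal sketch of what the `pack_features_vector` step above does, using a hand-made two-example feature dictionary (the `toy_features` values are made up purely for illustration): ###Code
# Pack a toy feature dictionary the same way pack_features_vector does.
toy_features = {
    'sepal_length': tf.constant([5.1, 6.9]),
    'sepal_width': tf.constant([3.3, 3.1]),
    'petal_length': tf.constant([1.7, 5.4]),
    'petal_width': tf.constant([0.5, 2.1]),
}
packed = tf.stack(list(toy_features.values()), axis=1)
print(packed.shape)  # (2, 4): two examples, four features each
###Output _____no_output_____ ###Markdown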
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
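Because the best shape is found by experimentation, it is common to parameterize the architecture and try a few variants. A minimal sketch — the helper name `build_model` and the value 16 are arbitrary choices for illustration, not recommendations: ###Code
# Build candidate models that differ only in hidden-layer width.
def build_model(hidden_units):
  return tf.keras.Sequential([
    tf.keras.layers.Dense(hidden_units, activation=tf.nn.relu, input_shape=(4,)),
    tf.keras.layers.Dense(hidden_units, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)
  ])

candidate_model = build_model(16)  # e.g. try 16 neurons per hidden layer
###Output _____no_output_____ ###Markdown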
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
###Code ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tf.keras.metrics.Mean() epoch_accuracy = tf.keras.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module. Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectiveness Now that the model is trained, we can get some statistics on its performance. *Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:

| Example features | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An Iris classifier that is 80% accurate. Setup the test dataset Evaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model. The setup for the test `Dataset` is similar to the setup for the training `Dataset`.
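Before setting it up, note that the 80% figure can be reproduced with the same `tf.keras.metrics.Accuracy` object this notebook already uses during training — a small sketch in which `toy_accuracy` and the hard-coded label/prediction lists (copied from Figure 4) are illustrative only: ###Code
# Accuracy over the five Figure 4 rows: 4 of 5 match, i.e. 80%.
toy_accuracy = tf.keras.metrics.Accuracy()
toy_accuracy([1, 2, 0, 1, 1], [1, 2, 0, 2, 1])
print("Toy accuracy: {:.3%}".format(toy_accuracy.result()))
###Output _____no_output_____ ###Markdown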
Download the CSV text file and parse that values, then give it a little shuffle: ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.data.experimental.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tf.keras.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub This guide uses machine learning to *categorize* Iris flowers by species. It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. 
TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code import os import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peak at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter. 
###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
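One way to make "the best shape" concrete is to count trainable weights by hand — a quick sketch of the arithmetic, assuming the layer sizes in the cell above (Dense(10), Dense(10), Dense(3) on four input features): ###Code
# Dense layer parameters = inputs * units + units (weights plus biases).
layer_params = [4 * 10 + 10, 10 * 10 + 10, 10 * 3 + 3]
print(layer_params)       # [50, 110, 33]
print(sum(layer_params))  # 193 trainable parameters in total
###Output _____no_output_____ ###Markdown More neurons mean more of these parameters to fit, which is the trade-off described next.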
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, the model returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class of each example.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how far off a model's predictions are from the desired label, in other words, how badly the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function, which takes the model's raw class predictions (logits) and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
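If `tf.GradientTape` is unfamiliar, here is a tiny self-contained sketch (illustrative only, not part of the original tutorial) showing how the tape records an operation and returns its gradient: ###Code
# Minimal GradientTape sketch: d(x^2)/dx at x = 3.0 should be 6.0.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x))
###Output _____no_output_____ ###Markdown The `grad` helper below applies the same pattern to the model's loss and trainable variables.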
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
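The loop below trains for a fixed `num_epochs`. One common alternative, shown here only as a rough sketch (a hypothetical helper, not used by this tutorial), is to stop once the loss has plateaued: ###Code
def should_stop(loss_history, patience=10, min_delta=1e-3):
    """Illustrative early-stopping check: stop when the best loss of the
    last `patience` epochs is no better than the best loss before them."""
    if len(loss_history) <= patience:
        return False
    best_recent = min(loss_history[-patience:])
    best_earlier = min(loss_history[:-patience])
    return best_earlier - best_recent < min_delta

print(should_stop([0.9, 0.5, 0.2, 0.2, 0.2, 0.2], patience=3))  # True: no recent improvement
###Output _____no_output_____ ###Markdown A check like this could be evaluated at the end of each epoch against `train_loss_results`, breaking out of the loop when it returns `True`.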
###Code ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tf.keras.metrics.Mean() epoch_accuracy = tf.keras.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:

| Example features (sepal length, sepal width, petal length, petal width) | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An Iris classifier that is 80% accurate.&nbsp; Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`. 
Download the CSV text file and parse the values (note that `shuffle=False`, so the test examples keep their file order): ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.data.experimental.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tf.keras.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall that the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This guide uses machine learning to *categorize* Iris flowers by species. 
It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code import os import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
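`get_file` also caches the download, so re-running the cell reuses the local copy instead of fetching it again; by default the cached file ends up under `~/.keras/datasets/`. A quick, purely illustrative check of that cache location: ###Code
import os

# True once a previous run has downloaded the training CSV.
cached_path = os.path.expanduser("~/.keras/datasets/iris_training.csv")
print("Training CSV already cached:", os.path.exists(cached_path))
###Output _____no_output_____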
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peek at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with a string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter. 
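For comparison, roughly the same input pipeline could be assembled by hand with pandas and `tf.data.Dataset.from_tensor_slices`. This is only a sketch, assuming pandas is installed; `manual_dataset` is an illustrative name, and the tutorial itself sticks with `make_csv_dataset` in the next cell: ###Code
import pandas as pd  # assumption: pandas is available in this environment

# Read the same CSV (header=0 consumes the metadata line, names= relabels the columns),
# then batch and shuffle it by hand.
df = pd.read_csv(train_dataset_fp, names=column_names, header=0)
manual_dataset = tf.data.Dataset.from_tensor_slices(
    (dict(df[feature_names]), df[label_name].values))
manual_dataset = manual_dataset.shuffle(1000).batch(32)
print(manual_dataset)
###Output _____no_output_____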
###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, the model returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class of each example.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how far off a model's predictions are from the desired label, in other words, how badly the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function, which takes the model's raw class predictions (logits) and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
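As a quick sanity check on the loss defined above (purely illustrative, not part of the original tutorial), sparse softmax cross-entropy for a single example is just `-log(softmax(logits)[true_class])`: ###Code
# Compare a manual cross-entropy computation against the built-in op.
example_logits = tf.constant([[2.0, 1.0, 0.1]])
true_class = tf.constant([0])
manual = -tf.math.log(tf.nn.softmax(example_logits)[0, 0])
builtin = tf.losses.sparse_softmax_cross_entropy(labels=true_class, logits=example_logits)
print(manual.numpy(), builtin.numpy())  # the two values should match
###Output _____no_output_____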
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
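The learning rate set earlier is another such hyperparameter: the loop below keeps it fixed at `0.01`, but a common experiment is to decay it as training progresses. A rough sketch of such a schedule (a hypothetical helper, not used by this tutorial): ###Code
def learning_rate_for(epoch, base_rate=0.01, decay=0.5, every=100):
    """Illustrative step decay: halve the base rate every `every` epochs."""
    return base_rate * (decay ** (epoch // every))

print([learning_rate_for(e) for e in (0, 100, 200)])  # [0.01, 0.005, 0.0025]
###Output _____no_output_____ ###Markdown With a schedule like this, a new `tf.train.GradientDescentOptimizer` could be constructed whenever the rate changes.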
###Code ## Note: Rerunning this cell uses the same model variables # keep results for plotting train_loss_results = [] train_accuracy_results = [] num_epochs = 201 for epoch in range(num_epochs): epoch_loss_avg = tf.keras.metrics.Mean() epoch_accuracy = tf.keras.metrics.Accuracy() # Training loop - using batches of 32 for x, y in train_dataset: # Optimize the model loss_value, grads = grad(model, x, y) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) # Track progress epoch_loss_avg(loss_value) # add current batch loss # compare predicted label to actual label epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y) # end epoch train_loss_results.append(epoch_loss_avg.result()) train_accuracy_results.append(epoch_accuracy.result()) if epoch % 50 == 0: print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch, epoch_loss_avg.result(), epoch_accuracy.result())) ###Output _____no_output_____ ###Markdown Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up. ###Code fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8)) fig.suptitle('Training Metrics') axes[0].set_ylabel("Loss", fontsize=14) axes[0].plot(train_loss_results) axes[1].set_ylabel("Accuracy", fontsize=14) axes[1].set_xlabel("Epoch", fontsize=14) axes[1].plot(train_accuracy_results) plt.show() ###Output _____no_output_____ ###Markdown Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:

| Example features (sepal length, sepal width, petal length, petal width) | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An Iris classifier that is 80% accurate.&nbsp; Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`. 
Download the CSV text file and parse the values (note that `shuffle=False`, so the test examples keep their file order): ###Code test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv" test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url), origin=test_url) test_dataset = tf.data.experimental.make_csv_dataset( test_fp, batch_size, column_names=column_names, label_name='species', num_epochs=1, shuffle=False) test_dataset = test_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set. ###Code test_accuracy = tf.keras.metrics.Accuracy() for (x, y) in test_dataset: logits = model(x) prediction = tf.argmax(logits, axis=1, output_type=tf.int32) test_accuracy(prediction, y) print("Test set accuracy: {:.3%}".format(test_accuracy.result())) ###Output _____no_output_____ ###Markdown We can see on the last batch, for example, the model is usually correct: ###Code tf.stack([y,prediction],axis=1) ###Output _____no_output_____ ###Markdown Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall that the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica ###Code predict_dataset = tf.convert_to_tensor([ [5.1, 3.3, 1.7, 0.5,], [5.9, 3.0, 4.2, 1.5,], [6.9, 3.1, 5.4, 2.1] ]) predictions = model(predict_dataset) for i, logits in enumerate(predictions): class_idx = tf.argmax(logits).numpy() p = tf.nn.softmax(logits)[class_idx] name = class_names[class_idx] print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p)) ###Output _____no_output_____ ###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Custom training: walkthrough Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This guide uses machine learning to *categorize* Iris flowers by species. 
It uses TensorFlow's [eager execution](https://www.tensorflow.org/r1/guide/eager) to:1. Build a model,2. Train this model on example data, and3. Use the model to make predictions about unknown data. TensorFlow programmingThis guide uses these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/r1/guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/r1/guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial is structured like many TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions. Setup program Configure imports and eager executionImport the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/r1/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar. Eager execution is available in [TensorFlow >=1.8](https://www.tensorflow.org/install/).Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/r1/guide/eager) for more details. ###Code import os import matplotlib.pyplot as plt import tensorflow.compat.v1 as tf print("TensorFlow version: {}".format(tf.__version__)) print("Eager execution: {}".format(tf.executing_eagerly())) ###Output _____no_output_____ ###Markdown The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will only classify the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0).&nbsp; Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetDownload the dataset file and convert it into a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file. 
###Code train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv" train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url), origin=train_dataset_url) print("Local copy of the dataset file: {}".format(train_dataset_fp)) ###Output _____no_output_____ ###Markdown Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peek at the first five entries: ###Code !head -n5 {train_dataset_fp} ###Output _____no_output_____ ###Markdown From this view of the dataset, notice the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names.2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Let's write that out in code: ###Code # column order in CSV file column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] feature_names = column_names[:-1] label_name = column_names[-1] print("Features: {}".format(feature_names)) print("Label: {}".format(label_name)) ###Output _____no_output_____ ###Markdown Each label is associated with a string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). ###Code class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica'] ###Output _____no_output_____ ###Markdown Create a `tf.data.Dataset`TensorFlow's [Dataset API](https://www.tensorflow.org/r1/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.Since the dataset is a CSV-formatted text file, use the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) parameter. 
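To build intuition for what batching does before applying it to the Iris data, here is a tiny toy example (illustrative only, unrelated to the dataset): ###Code
# Ten integers grouped into batches of four: [0 1 2 3], [4 5 6 7], [8 9].
toy = tf.data.Dataset.range(10).batch(4)
for batch in toy:
    print(batch.numpy())
###Output _____no_output_____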
###Code batch_size = 32 train_dataset = tf.data.experimental.make_csv_dataset( train_dataset_fp, batch_size, column_names=column_names, label_name=label_name, num_epochs=1) ###Output _____no_output_____ ###Markdown The `make_csv_dataset` function returns a `tf.data.Dataset` of `(features, label)` pairs, where `features` is a dictionary: `{'feature_name': value}`With eager execution enabled, these `Dataset` objects are iterable. Let's look at a batch of features: ###Code features, labels = next(iter(train_dataset)) features ###Output _____no_output_____ ###Markdown Notice that like-features are grouped together, or *batched*. Each example row's fields are appended to the corresponding feature array. Change the `batch_size` to set the number of examples stored in these feature arrays.You can start to see some clusters by plotting a few features from the batch: ###Code plt.scatter(features['petal_length'].numpy(), features['sepal_length'].numpy(), c=labels.numpy(), cmap='viridis') plt.xlabel("Petal length") plt.ylabel("Sepal length") plt.show() ###Output _____no_output_____ ###Markdown To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: `(batch_size, num_features)`.This function uses the [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) method which takes values from a list of tensors and creates a combined tensor at the specified dimension. ###Code def pack_features_vector(features, labels): """Pack the features into a single array.""" features = tf.stack(list(features.values()), axis=1) return features, labels ###Output _____no_output_____ ###Markdown Then use the [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) method to pack the `features` of each `(features,label)` pair into the training dataset: ###Code train_dataset = train_dataset.map(pack_features_vector) ###Output _____no_output_____ ###Markdown The features element of the `Dataset` are now arrays with shape `(batch_size, num_features)`. Let's look at the first few examples: ###Code features, labels = next(iter(train_dataset)) print(features[:5]) ###Output _____no_output_____ ###Markdown Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. 
This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions.&nbsp; When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: `0.02` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.03` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the number of features from the dataset, and is required. ###Code model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3) ]) ###Output _____no_output_____ ###Markdown The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. 
As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Using the modelLet's have a quick look at what this model does to a batch of features: ###Code predictions = model(features) predictions[:5] ###Output _____no_output_____ ###Markdown Here, the model returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class of each example.To convert these logits to a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function: ###Code tf.nn.softmax(predictions[:5]) ###Output _____no_output_____ ###Markdown Taking the `tf.argmax` across classes gives us the predicted class index. But the model hasn't been trained yet, so these aren't good predictions. ###Code print("Prediction: {}".format(tf.argmax(predictions, axis=1))) print(" Labels: {}".format(labels)) ###Output _____no_output_____ ###Markdown Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how far off a model's predictions are from the desired label, in other words, how badly the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function, which takes the model's raw class predictions (logits) and the desired label, and returns the average loss across the examples. ###Code def loss(model, x, y): y_ = model(x) return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_) l = loss(model, features, labels) print("Loss test: {}".format(l)) ###Output _____no_output_____ ###Markdown Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/r1/guide/eager). 
###Code def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return loss_value, tape.gradient(loss_value, model.trainable_variables) ###Output _____no_output_____ ###Markdown Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results. Let's setup the optimizer and the `global_step` counter: ###Code optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) global_step = tf.Variable(0) ###Output _____no_output_____ ###Markdown We'll use this to calculate a single optimization step: ###Code loss_value, grads = grad(model, features, labels) print("Step: {}, Initial Loss: {}".format(global_step.numpy(), loss_value.numpy())) optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step) print("Step: {}, Loss: {}".format(global_step.numpy(), loss(model, features, labels).numpy())) ###Output _____no_output_____ ###Markdown Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each *epoch*. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation. 
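The loop below also relies on two stateful Keras metrics, `tf.keras.metrics.Mean` and `tf.keras.metrics.Accuracy`, which accumulate results across calls. This short added sketch (illustrative numbers only, not part of the original walkthrough) shows how they behave:
###Code
# Added sketch: stateful metrics keep a running result until they are reset or recreated.
running_mean = tf.keras.metrics.Mean()
running_mean(2.0)
running_mean(4.0)
print(running_mean.result())  # 3.0 -- average of everything seen so far

running_acc = tf.keras.metrics.Accuracy()
running_acc([0, 1, 2], [0, 1, 1])
print(running_acc.result())   # ~0.667 -- fraction of matching labels
###Output
_____no_output_____
###Markdown
The training loop recreates both metrics at the start of each epoch, so every epoch reports fresh statistics.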
###Code
## Note: Rerunning this cell uses the same model variables

# keep results for plotting
train_loss_results = []
train_accuracy_results = []

num_epochs = 201

for epoch in range(num_epochs):
  epoch_loss_avg = tf.keras.metrics.Mean()
  epoch_accuracy = tf.keras.metrics.Accuracy()

  # Training loop - using batches of 32
  for x, y in train_dataset:
    # Optimize the model
    loss_value, grads = grad(model, x, y)
    optimizer.apply_gradients(zip(grads, model.trainable_variables),
                              global_step)

    # Track progress
    epoch_loss_avg(loss_value)  # add current batch loss
    # compare predicted label to actual label
    epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y)

  # end epoch
  train_loss_results.append(epoch_loss_avg.result())
  train_accuracy_results.append(epoch_accuracy.result())

  if epoch % 50 == 0:
    print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
                                                                epoch_loss_avg.result(),
                                                                epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://tensorflow.org/tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')

axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)

axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:

| Example features | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An Iris classifier that is 80% accurate. Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`.
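As an aside, if only a single labeled file were available, one common approach (a hypothetical alternative, not used in this walkthrough) is to carve a held-out split out of one `Dataset` with `take` and `skip`:
###Code
# Added sketch: split one dataset into evaluation and training portions.
full_ds = tf.data.Dataset.range(10)   # stand-in for a real labeled dataset
held_out = full_ds.take(3)            # first 3 elements reserved for evaluation
remainder = full_ds.skip(3)           # the other 7 used for training
print(len(list(held_out)), len(list(remainder)))  # 3 7
###Output
_____no_output_____
###Markdown
Here, though, a separate test file is provided.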
Download the CSV text file and parse the values, then load it into a `Dataset`:
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"

test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
                                  origin=test_url)

test_dataset = tf.data.experimental.make_csv_dataset(
    test_fp,
    batch_size,
    column_names=column_names,
    label_name='species',
    num_epochs=1,
    shuffle=False)

test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set.
###Code
test_accuracy = tf.keras.metrics.Accuracy()

for (x, y) in test_dataset:
  logits = model(x)
  prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
  test_accuracy(prediction, y)

print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
We can see on the last batch, for example, the model is usually correct:
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
    [5.1, 3.3, 1.7, 0.5,],
    [5.9, 3.0, 4.2, 1.5,],
    [6.9, 3.1, 5.4, 2.1]
])

predictions = model(predict_dataset)

for i, logits in enumerate(predictions):
  class_idx = tf.argmax(logits).numpy()
  p = tf.nn.softmax(logits)[class_idx]
  name = class_names[class_idx]
  print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
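###Markdown
As a final convenience, the prediction loop above can be wrapped into a small helper. This is an added sketch (not part of the original walkthrough); the `predict_species` name is made up, and it simply reuses the `model` and `class_names` defined earlier:
###Code
def predict_species(sepal_length, sepal_width, petal_length, petal_width):
  """Return (species name, confidence) for one set of measurements."""
  example = tf.convert_to_tensor([[sepal_length, sepal_width, petal_length, petal_width]])
  logits = model(example)[0]
  class_idx = tf.argmax(logits).numpy()
  confidence = tf.nn.softmax(logits)[class_idx]
  return class_names[class_idx], float(confidence)

print(predict_species(5.1, 3.3, 1.7, 0.5))  # expected to favor Iris setosa
###Output
_____no_output_____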
Demo1.ipynb
###Markdown Python Indention ###Code if 5>2: print("Five is greater than two!") #This code shows a string of words ###Output Five is greater than two! ###Markdown Phyton Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s='Mark' #This is a type of string A='Raymond' #This is a type of string print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(int(4)) print(float(4)) f=56 print(type(f)) f=56.789 print(type(f)) x, y, z="one","two","three" print(x) print(y) print(z) x = y = z = "four" print(x) print(y) print(z) x="enjoying" print("Python programming is" " "+ x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k=10 l=5 print(k+l) k+=l #Is the same as k=k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is l k is not l k%=5 k ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") #This code shows a string of words ###Output Five is greater than two ###Markdown Python Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = 'Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(float(4)) f = 56.78 print(type(f)) x, y, z, = "one", "two", "three" print(x) print(y) print(z) x = y = z ="four" # multiple variable with single value print(x) print(y) print(z) x = "enjoying" print("Python programming is" " " + x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k+=l #Is the same as k = k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown ###Code ###Output _____no_output_____ ###Markdown Introduction to Python ###Code #Python Introduction if 5>2: print("Five is greater than two!") x = 1 #This is a single variable with single value x,y = 1,2 #These are two variables with two different values x,y,z =1,2,3 #These are multiple variables with different values print(x) print(y) print(z) print(x,y,z) x,y = "four",2 x y x ###Output _____no_output_____ ###Markdown Casting ###Code b = int(4) b c = float(4) c x = 5 #This is an integer y = "John" #This is a string z = "ana" Z = "Ana" print(type(x)) print(type(y)) print(type(z)) print(x) print(y) print(z) print(Z) ###Output <class 'int'> <class 'str'> <class 'str'> 5 John ana Ana ###Markdown One Value to Multiple Variables ###Code x = y = z = 'four' print(x) print(y) print(z) x = "enjoying" print("Python Programming is " + x) x = 11 y = 12 z = 13 print(x+y+z) x+=3 #This is the same as x = x + 3 print(x) y+=5 print(y) print(x>y) x<y and x!=x x>y or not(y==z) not(print(x>y)) #Identity Operations print(x is y) print(x is not z) ###Output False True ###Markdown ###Code ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Phyton Identification if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Phyton Variable ###Code x=1 a, b=0, 1 a,b,c= "Zero","One","Two" print(x,a,b,c) d= "Sally" #This is a string D= "Ana" e= "John" print (d,e,D) print(type(d)) #This is a Type Function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f= float(4) print(f) g= int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables With One Value ###Code a= y= s= "four" print(a, y, s) x= "Enjoying" print("Python Programming Is """ +x) ###Output Python Programming Is Enjoying 
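###Markdown
The operator cells above use augmented assignment such as `k+=l` and `k%=5` with only brief comments. The next cell is a small added sketch (not part of the original notebooks) spelling out what each shorthand expands to:
###Code
k = 10
k += 5    # same as k = k + 5, so k is now 15
print(k)
k %= 4    # same as k = k % 4, the remainder of 15 / 4
print(k)  # 3
###Output
_____no_output_____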
###Markdown Operations in Phyton ###Code x= 5 y= 7 x+=y #This is the same as x= x+y print(x+y) print(x*y) print(x) x=5 y=7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code #This is a program to print string hehe a = "Franz" print(a) #This is a program to differ a to A and a to a1 hehe a = "Franz" A = "Louise" b = "Belostrino" B = "Gloriani" print(a) print(A) print(b) print(B) a = "Franz" a1 = "Louise" b = "Belostrino" b1 = "Gloriani" print(a) print(a1) print(b) print(b1) #This is a program to print integer hehe a, b, c = 0, 1, 2 print(a) print(b) print(c) #This is a program to print types of integer hehe a = int(8) print(a) b = float(8) print(b) c = float(8.0000) print(c) d = float(8.0123) print(d) e = float(8.01234567) print(e) #This is a program to print the type of the variable hehe a = "Franz" print(type(a)) b = 0 print(type(b)) c = float(0) print(type(c)) d = 0 print(type(float(0))) ###Output <class 'str'> <class 'int'> <class 'float'> <class 'float'> ###Markdown One Value to Multiple Variables ###Code x = y = z = "Franz" print(x) print(y) print(z) x = "Fun" print('Python programming is' + x) #Comma print('Python programming is' + ' ' + x) print("Python programming is" + x) #Double Comma print("Python programming is" + " " + x) x = 5 y = 10 print(x+y) #Addition print(x-y) #Subtraction print(x*y) #Multiplication print(x/y) #Division #This is an example of program using logical operator x<y and x==x #If one is true, it will result true x>y and x==x #Both are False ###Output _____no_output_____ ###Markdown ###Code b = "sally" print(b) a = 'Sally' A = 'John' print(a) print (A) b = "sally" print (type(b)) a, b, c = 0, 1, 2 print(a) print(b) print(c) a, b, c = 0, 1, 2 print(a) # This is a program using type function print(b) print(c) a = float(4) print(a) a = 4.50 print(type(a)) a,b,c, = 0,1,2 print(type(a)) x = y = z="four" print(x) print(y) print(z) x= "enjoying" print('Python programming is '+ x) x = 4 y = 5 print(x+y) print(x-y) x<y and x==x x>y or x==x not(x>y or x==x) ###Output _____no_output_____ ###Markdown Type () funtion ###Code x=10 y="Rov" print(type(x)) print(type(y)) ###Output <class 'int'> <class 'str'> ###Markdown Case Sensitive ###Code X=5 Y="rov" print(x) print(y) print(X) print(Y) ###Output 10 Rov 5 rov ###Markdown Mutiple Variables ###Code x, y, z=5,2,1 print(x) print(y) print(z) ###Output 5 2 1 ###Markdown One to Multiple Variables ###Code x=y=z="rovick" print(x) print(y) print(z) ###Output rovick rovick rovick ###Markdown output variables ###Code z="causaren" print("jan rovick " + z) x="jan rovick " print(x+z) ###Output jan rovick causaren ###Markdown arithmetic operation ###Code x=5 y=7 z=10 print(x+y) print(x*y) y==x x<6 and x<10 y<10 or y<20 not(y<10 or y<20) y is x y is y y is not y y is not x ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indentation if 5>2: print("five is greater than two") ###Output five is greater than two ###Markdown Python Variable ###Code x=1 a, b=0, 1 a,b,c="zero","one","two" print(x) print(a) print(b) print(c) d="Edwardo" #This is a string D="Matabang" print(d) e="Gillo" print(e) print(D) print(type(d)) #This is a type function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f=float(4) print(f) g=float(5) g=int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables in One Value ###Code x=y=z="four" print(x) print(y) print(z) x="enjoying" print("Python Programming is" " " + x) ###Output Python 
Programming is enjoying ###Markdown Operation in Python ###Code x=5 y=7 x+= y #This is the same as x = x + y print(x+y) print(x*y) print(x) x=5 y=7 not(x>y or y==x) x is not y x is y ###Output _____no_output_____ ###Markdown Introduction to Python Python Indentation ###Code if 5>2: print("five is greater than two") x = 2 #This is a single variable with single value x,y = 1,2 #This are two variables with two different values x,y,z = 1,2,3 print(x) print(y) print(z) x,y = "four",2 x y x ###Output _____no_output_____ ###Markdown Python Comments ###Code #This is a comment print("Hello, I am JV") ###Output Hello, I am JV ###Markdown Variable Naming Conventions ###Code myvar="Vincent" print(myvar) ###Output Vincent ###Markdown Casting ###Code b = "sally" #This is a type of string b = int(4) print(b) b = float(4) print(b) ###Output 4 4.0 ###Markdown Type Function ###Code x = 5 y = 'John' #This is a type of string h = "ana" H = 'Ana' print(type(x)) print(type(y)) print (h) print (H) ###Output <class 'int'> <class 'str'> ana Ana ###Markdown Multiple Variables ###Code x, y, z= "one", "two", "three" print(x) print(y) print(z) ###Output one two three ###Markdown One Value to Multiple Variables ###Code x = y = z = 'four' print(x) print(y) print(z) ###Output four four four ###Markdown Output Variables ###Code x = 'enjoying' print("Python programming is " + x) ###Output Python programming is enjoying ###Markdown Arithmetic Operations ###Code x = 5 y = 3 print(x+y) x = 11 y = 12 z = 13 print(x+y+z) ###Output 36 ###Markdown Assignment Operators ###Code x+=3 #This is the same as x = x + 3 print(x) y+=5 print(y) ###Output 17 ###Markdown Comparison Operators ###Code x>y x<y ###Output _____no_output_____ ###Markdown Logical Operators ###Code x<y and x!=x x>y or y==z not(print(x>y)) print (x is y) print (x is not z) ###Output False True ###Markdown ###Code if 5 > 2: print("Five is greater than 2!") a, b, c=0,1,2 d = "Sally" #This is a type of string s = "Mark" #This is a type of string A = "Raymond" #This is a type of string print(a) print(b) print(c) print(d) print(s) print(A) print(int(4)) f = 56.789 print(type(f)) x, y, z = "one", "two", "three" print(x) print(y) print (z) x = y = z = "four" #Multiple variable with single value print(x) print(y) print(z) x = "Python is" y = " ""enjoying" z = x + y print(z) k = 10 l = 5 print(k + l) k+=l #This is equal to k = k+l print(k) k>l and l==l k<l or k==k not (k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b= "Sally" print(type(b)) ###Output <class 'str'> ###Markdown Naming Variables ###Code a = 'Sally' A = "John" print(a) print(A) a, b, c = 0, 1, 2 print(type(a)) #This is a program using type function print(b) print(c) a = 4.50 print(type(a)) ###Output <class 'float'> ###Markdown One Value to Multiple Variables ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python programming is" + " " + x) x = 4 y = 5 print(x+y) print(x-y) not(x>y or x == x) #This is an example of program using logical operator ###Output _____no_output_____ ###Markdown ###Code ##Demo1 #Introduction #demo1 A = "Sally pero malaki" a = "Sally pero single" b = "Sally" print(A) print(a) print(b) print(type(b)) a, b, c = "apple", "boy", "cat" d, e = 1, 2 print(a, b, c, d, e) print(type(a)) print(type(d)) a = float (4) b = int(4) c = 4.5 print(a, b) print(type(c)) x = y = z = "three last letter" print(z) print(y) print(x) x = "ok lang" print('python programming is '+ x) x = 1 y = 2 print(x+y) print(x-y) print(x*y) 
print(x/y) x<y and x==y #example of program using logical operator ###Output 3 -1 2 0.5 ###Markdown Intro to Python Programming ###Code #Phyton Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x=1 a,b=0,1 a,b,c="zero","one","two" print(x) print(a) print(b) print(c) d="Sally" #This is a string D="Ana" print(d) e="John" print(e) print(D) print(type(d)) print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f=float(4) print(f) g=int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x=y=z="four" print(x) print(y) print(z) x="enjoying" print("Python Programming is" " "+x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x=5 y=7 x += y #This is the same as x=x+y print(x) x=5 y=7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Introduction to Python ###Code #Python Indention if 5>2: print("five is greater than two") x = 1 #This is a single variable with single value x,y=1,2 #These are two variables with two different values x,y,z=1,2,3 print(x) print(y) print(z) x,y="four",2 x y x ###Output _____no_output_____ ###Markdown Casting ###Code b = int(4) b c = float(4) c ###Output _____no_output_____ ###Markdown Type Function ###Code x = 5 y = 'John' #This is a type of string h = "ana" H = 'Ana' print(type(x)) print(type(y)) print(h) print(H) ###Output <class 'int'> <class 'str'> ana Ana ###Markdown One Value to Multiple Variables ###Code x = y = z = 'four' print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) x = 11 y = 12 z = 13 print(x+y+z) x+=3 #This is the same as x = x +3 print(x) y+=5 print(y) x<y and x!=x x>y or not(y==z) not(print(x>y)) #Identity operations print(x is y) print(x is not z) ###Output False True ###Markdown ###Code a, b, c, d = 1, 2, 3, 5 print(a) print(b) print(c) print(d) x = " enjoying" print("programming is" + x) x = 9 print(x+x) ###Output 18 ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x = 1 a, b = 0, 1 a,b,c= "zero","one","two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string D = "Ana" print(d) e= 'John' print(e) print(D) print(type(d)) #This is a type of function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x+y) print(x*y) print(x) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Introduction to Python Indention ###Code if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x=1 #This is an integer a, b = 0,1 a, b, c = 0,1,2 print(x) print(a) print(b) print(c) x="one" #This is a string a, b = "zero", "one" a, b, c = "zero", "one", "two" print(x) print(a) print(b) print(c) d="Philippe" #case sensitive D="Justine" print(d) e="Binato" print(d) print(D) print(e) ###Output Philippe Philippe Justine Binato ###Markdown Casting ###Code f=float(9) print(f) g=int(9) print(g) ###Output 9.0 9 ###Markdown Type Function ###Code h=1.0 i=1 
j="function" print(type(h)) print(type(i)) print(type(j)) ###Output <class 'float'> <class 'int'> <class 'str'> ###Markdown Double Quotes and Single Quotes ###Code k='Single Quotes' l="Double Quotes" #Both Works print(k) print(l) ###Output Single Quotes Double Quotes ###Markdown Multiple Variables ###Code m,n,o="one", "two", "three" #Multiple Variables with Multiple Value p=q=r="zero" #Multiple Variables with One Value print(m) print(n) print(o) print(p) print(q) print(r) ###Output one two three zero zero zero ###Markdown Output Variables ###Code p="Introduction to Python." print("I understand the" " " + p) ###Output I understand the Introduction to Python. ###Markdown Operations in Python ###Code q=2 r=8 print(q+r) #Addition q=2 r=8 q -= r #This is equal to q=q-r print(q) print(type(q)) s=0 t=8 s==s and t==t #This is 'and' function s==t or t==t #This is 'or' function not(s<t and t>s) #This is 'not' function s is t #This is 's' function s is not t #This is 'is not' function ###Output _____no_output_____ ###Markdown **INTRODUCTION TO PROGRAMMING** **Naming of Variable** ###Code x = "Trisha" print(x) x = 'Trisha' print(type(x)) a = "Trisha" A = "Faye" print(a) print(A) a, b, c = 0, 1, 2 print(type(a)) #This is a program using a type function print(b) print(c) a = 4.50 print(type(a)) #This is a program using a type function a = float(4) print((a)) ###Output 4.0 ###Markdown **One Value to Multiple Variable** ###Code x = y = z = "four" print(x) x = "enjoying" print('Programing is'+ " " + x) x = "Python" y = " is enjoying" print(x+y) x = 3 y = 5 sum = x + y print(sum) x = 4 y = 5 print(x+y) print(x-y) x<y and x==x not(x<y or x==x) #This is an example of comparison operators ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Phython Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Phyton Variable ###Code x = 1 a, b = 0 , 1 a,b,c= "zero","one","two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string print(d) e = "john" print(e) print(type(d)) #this is function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print (f) g = int() ###Output _____no_output_____ ###Markdown Multiple VAriables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x) print(x*y) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Python Variable ###Code x=10 y=20 z=30 print(x) print(y) print(z) p,q,r=25,50,75 print(p) print(q) print(r) a="Mark" A="Macapinlac" print(a) print(A) ###Output Mark Macapinlac ###Markdown Casting ###Code f=float(100) print(f) i=int(100) print(i) ###Output 100.0 100 ###Markdown Multiple Variables with One Value ###Code m = n = o = 1000 print(m) print(n) print(o) ###Output 1000 1000 1000 ###Markdown Operation in Python ###Code A = 5 B = 5 print(A+B) print(A-B) print(A*B) A is B A is not B A>B or A<B ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x = 1 a, b = 0, 1 a,b,c= "zero","one","two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string D = "Ana" print(d) e = 'John' print(e) print(D) print(type(d)) #This is a Type function print(type(x)) ###Output 
<class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") #This code shows a string of words ###Output Five is greater than two ###Markdown Python Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = 'Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(float(4)) f = 56.789 print(type(f)) x, y, z = "one", "two", "three" print(x) print(y) print(z) x = y = z = "four" #Multiple Variable with single value print(x) print(y) print(z) x = "enjoying" print("Python programming is" " " + x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k+=l #Is the same as k = k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b = "Sally" print(b) b = "Sally" print(type(b)) ###Output <class 'str'> ###Markdown Naming Variables ###Code a = 'Sally' A = "John" print(a) print(A) a, b, c = 0, 1, 2 print(type(a)) #This is a program using typing function print(b) print(c) a = 4.50 print(type(a)) ###Output <class 'float'> ###Markdown One Value to Multiple Variables ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python programming is "+ x) x = 4 y = 5 print(x+y) print(x-y) not(x>y or x==x) #This is an example of program using logical operator ###Output _____no_output_____ ###Markdown **Introduction to Python Programming** ###Code a = "Sally" A = "John" print(a) print(A) b = "Sally" print(type(b)) a, b, c = 0, 1, 2 print(type(a)) #This is a comment print(b) print(c) a = int(4) print(a) a = float(4) print(a) ###Output _____no_output_____ ###Markdown One Value to Multiple Variables ###Code x = y = z ="four" print(x) print(y) print(z) x = "enjoying" print("Python programming is "+ x) x = 9 y = 5 print(x+y) print(y-x) x>y and x==x ###Output _____no_output_____ ###Markdown Introduction to Python ###Code #Python Indention if 5>2: print ("Five is greater than two") x=1 #Single Variable with single value x,y=1,2 #Two variables with two different values x,y,z=1,2,3 #Three variables with three different values print(x) print(y) print(z) x,y="four",2 x y x ###Output _____no_output_____ ###Markdown Casting ###Code b = int(4) b c = float(4) c ###Output _____no_output_____ ###Markdown Type Function ###Code x = 5 y = 'John' h = "ana" H = "Ana" print(type(x)) print(type(y)) print(h) print(H) ###Output <class 'int'> <class 'str'> ana Ana ###Markdown One Value to Multiple Variables ###Code x = y = z = 'four' print(x) print(y) print(z) x = 'enjoying' print("Python programming is"" " + x) x=11 y=12 z=13 print(x+y+z) x+=3 #Same as x = x+3 print(x) y+=5 print(y) x<y and x!=x x>y or not (y==z) not(print(x>y)) #Identity operations print(x is y) print(x is not z) ###Output False True ###Markdown ###Code Basic coding ###Output _____no_output_____ ###Markdown ###Code b = 'Sally' print(b) b = 'Sally' print(type(b)) x = '5' y = '4' 
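###Markdown
Several of the cells above compare values with `is` and `is not` without explaining how identity differs from equality. The next cell is a small added sketch (not part of the original notebooks) that makes the distinction explicit:
###Code
a = [1, 2, 3]
b = a           # b refers to the very same list object
c = [1, 2, 3]   # c is an equal but separate object
print(a == c)   # True  -- same contents
print(a is c)   # False -- different objects in memory
print(a is b)   # True  -- same object
###Output
_____no_output_____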
print(x>y and x==x) x=y=z = 'four' print(x) print(y) print(z) x = 'enjoying' print('python programming is'+' x') a = 'John' A = 'Sally' print(a) print(A) ###Output John Sally ###Markdown Introduction to Phyton progamming ###Code b = "Sally" print (type(b)) a = "sally" A = "john" print(a) print (A) a, b ,c = 0, 1, 2 print(type(a)) print(b) print(c) a = float(4.50) print(type(a)) ###Output <class 'float'> ###Markdown One value to multiple Variables ###Code x = y = z ="four" print(x) print(y) print(z) x ="enjoying" print('Phython programming is '''+ x) x = 4 y = 5 print(x+y) print(x-y) not(x>y or x==x) ###Output _____no_output_____ ###Markdown **Introduction to Python Programming** ###Code b = "Sally" print(b) ###Output Sally ###Markdown **Naming Variable** ###Code b="Sally" print(type(b)) a = "Sally" A = "John" print(a) print(A) a, b, c= 0, 1, 2 print(type(a)) #This is a program using type function print(b) print(c) a = 4.50 print(a) a = int(5) print(a) a = float(4) print((a)) a = 3 float(a) ###Output _____no_output_____ ###Markdown **One Value to Multiple Value** ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying." print("Programming is "+x) x = 3 y = 5 sum = x + y print(sum) x = 6 y = 4 print(x+y) print(x-y) x = 9 y = 6 x<y or x==x not(x<y or x==x) ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b = "Sally" print(b) a, b, c = 0, 1, 2 print (a) print (b) print (c) a = int(4) print (a) b = "Sally" print(type(b)) ###Output _____no_output_____ ###Markdown One value to multiple variables ###Code x = y = z = "four" print (x) print (y) print (z) x = "enjoying" print ("Python programming is" + " " + x) x = 4 y = 5 print (x+y) print (x-y) x<y and x == x not(x<y and x == x) ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Comments ###Code print("Hello, World!") #This is a comment ###Output Hello, World! 
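###Markdown A small aside, not taken from any of the submissions above: a comment can sit on its own line or after a statement, and a leading hash can also temporarily disable a line of code. The name and value here are purely illustrative. ###Code
# A full-line comment: Python ignores everything after the hash mark
greeting = "Hello, World!"  # an inline comment after a statement
# print("this line is disabled and will not run")
print(greeting)
###Output Hello, World!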
###Markdown Python Variable ###Code x=1 x,y=1,2 x,y,z=1,2,3 print(x) print(y) print(z) ###Output 1 2 3 ###Markdown Casting ###Code b=int(4) print(b) ###Output 4 ###Markdown Type Function ###Code x=5 y="John" print(type(x)) print(type(y)) ###Output <class 'int'> <class 'str'> ###Markdown Quotes ###Code y="John" y='John' print(y) ###Output John ###Markdown Case Sensitive ###Code a=4 A="Sally" #A will not overwite a print(a) print(A) ###Output 4 Sally ###Markdown Multiple Variables ###Code x,y,z="one","two","three" print(x) print(y) print(z) ###Output one two three ###Markdown One Value to Multiple Variables ###Code x=y=z="four" print(x) print(y) print(z) ###Output four four four ###Markdown Output Variables ###Code x="Wonyoung" print("Jang " +x) ###Output Jang Wonyoung ###Markdown Arithmetic Operators ###Code x=3 y=8 print(x+y) x=2 y=9 z=8 sum=(x+y+z) print(sum) a,b,c=0,-1,6 #assignment operator c%=3 print(c) x==y #comparison operator x==y and x!=z #logical operator x,y,z=6,7,6 #identity operator x is z ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x=1 a, b = 0,1 a,b,c="zero","one","two" print(x) print(a) print(b) print(c) d="Sally" #This is a string D="Ana" print(d) e="John" print(e) print(d) print(type(d)) print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f=float(4) print(f) g=int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x= "enjoying" print("Python Programming is" " " +x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x=5 y=7 x+=y #This is the same as x = x + y print(x+y) print(x*y) print(x) x=5 y=7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x = 1 a, b = 0, 1 a,b,c= "zero","one","two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string D = "Ana" print(d) e = 'John' print(e) print(D) print(type(d)) #This is a Type function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Intro To Python ###Code if 5>2: print("five is greater than two!") x=1 x, y= 1, 2 x, y, z= 1, 2, 3 print(x) print(y) print(z) b = int(4) b c = float(4) c x = 5 y = "Hello" print(type(x)) print(type(y)) y = 'John' z = "John" print(y) print(z) a = 7 A = 10 print(a) print(A) x,y,z = 6,7,8 print(x) print(y) print(z) x=y=z="seven" print(x) print(y) print(z) x ="enjoying" print("Python Programming is " + x ) x=34 y=35 z=x+y print(z) x+=7 print(x) x!=x x = 5 y = 4 z = 3 x>y and y>z x = 5 y = 4 z = 3 x<y or y>z x = 6 y = 6 x is y x = 7 y = 8 x is not y ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b = "Sally" print (b) a,b,c = 0, 1, 2 print (a) print (b) print (c) x=y=z = "0" print (x) print (y) print (z) 
b="Sally" b=int(4) print(b) b=float(4) print(b) y = 9 ###Output _____no_output_____ ###Markdown Python Programming ###Code x="enjoying" print('Python programming is'+" "+ x) x = 4 y = 5 print(x+y) print(x-y) x<y and x==x not(x<y and x==x) ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b = "Sally" print(type(b)) ###Output <class 'str'> ###Markdown Naming Variables ###Code a = 'Sally' A = 'John' print(a) print (A) a, b, c = 0, 1, 2 print(type(a)) #This is a program using type function print(b) print(c) a = float(4.50) print(type(a)) ###Output <class 'float'> ###Markdown One Value to Multiple Variables ###Code x = y = z ="four" print(x) print(y) print(z) x ="enjoying" print('Phython programming is '''+ x) x = 4 y = 5 print(x+y) print(x-y) not(x>y or x==x) #This is an example of program of logical operator ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print ("Five is greater than two") #This code shows a string of a words ###Output Five is greater than two ###Markdown Python Variable ###Code a,b,c=0,1,2 d ="Sally" #This is a type of string s= 'mark' #This is a type of string A= 'Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally mark Raymond ###Markdown Casting ###Code print(float(4)) f =56.47 print(type(f)) x,y,z="one","two", "three" print(x) print(y) print(z) x=y=z="four" #Multiple variable but single value print(x) print(y) print(z) x= "enjoying" print("Python programming is "+x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k=10 l=5 print(k+l) k+=l #Is the same as k=k+l print(k) k>l or l==k k>l or l==k not (k>1 or k==l) not (k is l) k is not l k%=3 k ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code x=1 a,b,c=0,1,2 print(x) print(a) print(b) print(c) d="Sally" D="Ana" e="John" print(d) print(e) print(D) print(type(d)) print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f=float(4) print(f) g=int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with one Value ###Code x=y=z="four" print(x) print(y) print(z) x="enjoying" print("Python Programming is " + x) x=5 y=7 x += y print(x) x=5 y=7 x<y and y==y x=5 y=7 x<y or y==y x=5 y=7 x is y x=5 y=7 not(y is x) ###Output _____no_output_____ ###Markdown Introduction to Python Programming Naming of variable ###Code a, b, c,= 0, 1, 2 print(a) print(b) print(c) print (type(a)) a = 3 float(a) f = a print(f) g = "Good morning" print(g) ###Output Good morning ###Markdown One Value to Multiple Variable ###Code x = y = z ="four" print(x) print(y) print(z) x = "enjoying" print("Programming is " + x) e = "many uses" print("Programming have " + e) x = 4 y = 5 print(x + y) print(x - y) x>y and x == x x = 3 y = 5 sum = x + y print(sum) x<y and x == x #This is an example of programming not(x<y and x == x) ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indentation if 5>2: print("five is greater than two") ###Output five is greater than two ###Markdown Python Variable ###Code x=1 a, b=0, 1 a,b,c="zero","one","two" print(x) print(a) print(b) print(c) d="Maria" #This is a string D="Jessica" print(d) e="Meri" print(e) print(D) print(type(d)) #This is a type function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f=float(4) print(f) g=float(5) g=int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables in One Value ###Code x=y=z="four" print(x) print(y) print(z) x="enjoying" print("Python 
Programming is" " " + x) ###Output Python Programming is enjoying ###Markdown Operation in Python ###Code x=5 y=7 x+= y #This is the same as x = x + y print(x+y) print(x*y) print(x) x=5 y=7 not(x>y or y==x) x is not y x is y ###Output _____no_output_____ ###Markdown **INTRODUCTION TO PHYTON PROGRAMMING** ###Code b = "Sally" print(type(b)) b = "Sally" print(b) a,b,c = 0,1,2 print(a) print(b) print(c) a,b,c = 0,1,2 #This is a program using text function print(type(a)) print(b) print(c) a = 4.50 print(type(a)) a = int(4) print(a) a = float(4) print(a) ###Output 4.0 ###Markdown Naming Variables ###Code a = "Johnny Johnny" A = "John" print(a) print(A) b = "Jisoo" B = "Kim" print(b) print(B) ###Output Jisoo Kim ###Markdown One Value to Multiple Variables ###Code x = y = z ="four" print(x) print(y) print(z) a = b = c ="5" print(a) print(b) print(c) l = i = s = a = "Lisa Manoban" print(l) print(i) print(s) print(a) x = "enjoying" print('Phyton Programming is'+" "+ x) y = "newbie" print('I am still a'+" "+ y) ###Output I am still a newbie ###Markdown Logical Operators ###Code x = 4 #This is a program using logical operators y = 5 print(x+y) print(x-y) print(x*y) print(x/y) print(x) x<y and x == x not(x<y and x==x) x>y and x==x not(x>y and x==x) ###Output _____no_output_____ ###Markdown Python Indentation ###Code if 5>2: print("five is greater than two") ###Output five is greater than two ###Markdown Python Variable ###Code x = 1 a, b=0, -1 a, b, c=0, -1, 2 print(a) print(b) print(c) ###Output 0 -1 2 ###Markdown Casting ###Code b = "sally" #This is a type of string b = int(4) print(b) b = float(4) print(b) ###Output 4.0 ###Markdown Type () Function ###Code x = 5 y = "Jhon" print(type(x)) print(type(y)) ###Output <class 'int'> <class 'str'> ###Markdown Double quotes or single quotes ###Code y = "Jhon" y = 'Jhon' print(y) print(y) ###Output Jhon Jhon ###Markdown Case sensitive ###Code a = 4 A = "Sally" #A will not overwrite a print(a) print(A) ###Output 4 Sally ###Markdown Multiple Variables ###Code x, y, z ="one", "two", "three" print(x) print(y) print(z) ###Output one two three ###Markdown One Value to Multilee Variables ###Code x = y = z = "four" print(x) print(y) print(z) ###Output four four four ###Markdown Output Variables ###Code x = "enjoying" print("Python programming is " + x) ###Output Python programming is enjoying ###Markdown Other way: ###Code x = "python is" y = " enjoying" z = x + y print(z) ###Output python is enjoying ###Markdown Arithmetic Operations ###Code x = 5 y = 3 print(x+y) x=5 y=3 sum=x+y sum ###Output _____no_output_____ ###Markdown Assignment Operators ###Code a,b,c=0,-1,6 c%=3 c ###Output _____no_output_____ ###Markdown Logical Operators ###Code a,b,c=0,-1,6 a>b and c>b True ###Output _____no_output_____ ###Markdown Identity Operators ###Code a,b,c=0,-1,5 a is c False ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x = 1 a, b = 0, 1 a,b,c="zero","one","two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string D="Ana" print(d) e = 'John' print(e) print(D) print(type(d)) #This is a Type function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z ="four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) ###Output Python 
Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x+y) print(x*y) print(x) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Introduction to Python Programing ###Code b = "Sally" print(type(b)) ###Output <class 'str'> ###Markdown Naming Variables ###Code a = 'Sally' A = "John" print(a) print(A) a, b, c = 0, 1, 2 print(type(a)) #This is programming using type function print(b) print(c) a = 4.50 print(type(a)) ###Output <class 'float'> ###Markdown One Value to Multiple Variables ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python programing is" + " " + x) x = 4 y = 5 print(x+y) print(x-y) not(x>y or x==x) #This is an example of program using logical operator ###Output _____no_output_____ ###Markdown Introduction to Python Programming Dominic Z. Marasigan ###Code a = 'Dominic' A = 'Drake' print(a) print (A) b = "Dominic" print (type(b)) a, b, c = 0, 1, 2 print(a) print(b) print(c) a, b, c = 0, 1, 2 print(a) # This is a program using type function. print(b) print(c) a = float(4) print(a) a,b,c, = 0,1,2 print(type(a)) x = y = z="zero" print(x) print(y) print(z) x= "enjoyable" print('Python programming is '+ x) x = -3 y = 3 z = 4 print (x+y) print (x<y) print (not(x<z and z>y)) ###Output 0 True False ###Markdown ###Code #Intro to Python Programming ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x = 1 a, b = 0, 1 a,b,c= "zero","one","two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string D = 'Ana' print(d) e = 'John' print(e) print(D) print(type(d)) #This is a Type function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x=1 a, b=0, 1 a,b,c= "Three","Four","Five" print(x,a,b,c) d= "Sally" #This is a string D= "Ana" e= "John" print (D,e,d) print(type(c)) #This is a Type Function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f= float(6) print(f) g= int(9) print(g) ###Output 6.0 9 ###Markdown Multiple Variables With One Value ###Code a= y= s= "seven" print(s, a, y) x= "Educational" print("Python Programming Is """ +x) ###Output Python Programming Is Educational ###Markdown Operations in Python ###Code x= 6 y= 9 x+=y #This is the same as x= x+y print(x+y) print(x*y) print(x) x=1 y=25 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Python Indentation ###Code if 5>2: print("Five is greater than two!") #This code shows a Strings of words ###Output Five is greater than two! 
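###Markdown To make the indentation cell above a little more concrete, here is a minimal sketch (values are illustrative) of a block that contains more than one indented statement; every line that belongs to the if-block shares the same indentation. ###Code
x = 5
y = 2
if x > y:
    print("x is greater than y")  # inside the if-block
    print(x - y)                  # still inside the if-block
print("done")                     # not indented, so it always runs
###Output x is greater than y
3
done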
###Markdown Python Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = 'Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(int(4)) f = 56.789 print(type(f)) x, y, z = "one","two","three" print(x) print(y) print(z) x = y = z = "four" #Multiple variable print(x) print(y) print(z) x = "enjoying" print("Python progamming is"" "+ x) ###Output Python progamming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k+=l #Is the same as k = k+l print(k) k>l and l==l k<l or k==k not (k<l or k==k) k is l k%=5 print(k) ###Output 0 ###Markdown Introduction ###Code s = "Hi there! How do you do? My name is Lleyton Earl Emmanuel Bondal from ECE2-3" a = "I am neophyte in encoding." y = "This is part of our subject course." o = "I hope we get along. Have a great life and hello world!" print(s) print(a) print(y) print(o) ###Output Hi there! How do you do? My name is Lleyton Earl Emmanuel Bondal from ECE2-3 I am neophyte in encoding. This is part of our subject course. I hope we get along. Have a great life and hello world! ###Markdown Python Indentation ###Code if 5>2: print("Five is greater than two!") #This code shows a string of words ###Output Five is greater than two! ###Markdown Python Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = "Mark" #This is a type of string A = "Raymond" print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(int(4)) f = 56.789 print(type(f)) x, y, z = "one", "two", "three" print(x) print(y) print(z) x = y = z = "four" #Multiple variable with single value print (x) print(y) print(z) x = "enjoying" print("Python programming"" "+ x) ###Output Python programming enjoying ###Markdown Operations is Python ###Code k = 10 l = 5 print(k+l) k+=l #This is the same as k = k+l print(k) k>l and l==l k<l or k==k not (k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown Bonus/Practice ###Code I = "I am turning 21 years old this Sunday. At this age, all I care is peace of mind." M = "I already invested in Mutual Fund, Soldivo Fund thru International Marketing Group-IMG for as low as Php1000 which allows me to become a shareholder of top chips company in the Philippines" G = "I also purchased a KAISER Long-Term healthcare which offers Investment Life Protection, and Healthcare. One of its benefit is to have a free medical consultation to any Kaiser affiliated Hospitals" P = "IMG offers at least 50+ benefits which include death/accidental insurance. The latest benefit added are Legal Consultation and Php20 minimum top up to your mutual fund account." O = "You can also create a sub-account for your child's education or retirement plan, etc." Y = "International Marketing Group is a brokerage company that offers financial education to every Filipino individual all over the world. Becoming an IMG member gives you the opportunity to have access to its" Z = "partnered companies in Philippines at the same time offers you a business/career opportunity" X = "Ms., practice lang po ng marketing skills, hehe. Salamat po. Take care!." print(I) print(M) print(G) print(P) print(O) print(Y) print(Z) print(X) ###Output I am turning 21 years old this Sunday. At this age, all I care is peace of mind. 
I already invested in Mutual Fund, Soldivo Fund thru International Marketing Group-IMG for as low as Php1000 which allows me to become a shareholder of top chips company in the Philippines I also purchased a KAISER Long-Term healthcare which offers Investment Life Protection, and Healthcare. One of its benefit is to have a free medical consultation to any Kaiser affiliated Hospitals IMG offers at least 50+ benefits which include death/accidental insurance. The latest benefit added are Legal Consultation and Php20 minimum top up to your mutual fund account. You can also create a sub-account for your child's education or retirement plan, etc. International Marketing Group is a brokerage company that offers financial education to every Filipino individual all over the world. Becoming an IMG member gives you the opportunity to have access to its partnered companies in Philippines at the same time offers you a business/career opportunity Ms., practice lang po ng marketing skills, hehe. Salamat po. Take care!. ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x = 1 a, b = 0, 1 a,b,c= 0,1,2 print(x) print(a) print(b) print(c) d = "Sally" #This is a string D = "Ana" print(d) e= 'John' print(e) print(D) print(type(d)) #This is a Type function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) ###Output Python Programming is enjoying ###Markdown Operation in Python ###Code x = 5 y = 7 print(x+y) print(x*y) print(x) ###Output 12 35 5 ###Markdown Introduction to python programming ###Code b = "sally" print(b) a = 'Sally' A = 'John' print(a) print (A) b = "sally" print (type(b)) a, b, c = 0, 1, 2 # This is a program using type function print(a) print(b) print(c) a,b,c, = 0,1,2 print(type(a)) x=y=z="four" print(x) print(y) print(z) a=4.50 print(type(a)) x="enjoying" print("Python programming is"+ x) x=4 y=5 print(x+y) print(x-y) x<y and x==x x>y or x==x not(x>y or x == x) #This is an example of programming operation ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") #This code shows a string of words ###Output Five is greater than two ###Markdown Python Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = 'Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(float(4)) f = 56.789 print(type(f)) x, y, z = "one", "two", "three" print(x) print(y) print(z) x = y = z = "four" # Multiple variable with single value print(x) print(y) print(z) x = "enjoying" print("Python programming is" " " + x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k+=l #Is the same as k = k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown Introduction to Programming ###Code b="sally" print(b) print(type(b)) #print type a, b, c = 0, 1, 2 print(a) print(b) print(c) a, b=15, 10 #integers print(a+b) print(a-b) print(type(a+b)) x, y, z = 4, 3, 2 x==z or y<=x a = 25 #float print(float(a)) ###Output 25.0 ###Markdown Intro to Python Programming ###Code #Phyton Indention 
if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x=1 a, b = 0, 1 a,b,c= "zero","one","two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string D = "Ana" print(d) e = 'John' print(e) print(D) print(type(d)) print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple Variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Phyton Programming is" " " + x) ###Output Phyton Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x+y) print(x*y) print(x) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print ("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code a, b, c = 0,1,2 d= "Sally" #This is a type of string s= 'Mark' #This is a type of string A= "Raymond" #This is a type of string print (a) print (b) print (c) print (d) print (s) print (A) x = y = z = "four" print (x) print (y) print (z) x = "enjoying" print("Python programming is" " " + x) x= "Python is " y= "enjoying" z= x+y print (z) ###Output Python is enjoying ###Markdown Casting ###Code b="Sally" #This is a type of string b=int(4) print (b) b= float (4) print (b) ###Output 4.0 ###Markdown Operations in Python ###Code k = 10 l = 5 print (k+l) k+=l #Is the same as k = k+l print (k) k>l and l==l k<l and k==k not(k<l) or k==k k is l k%=5 k a,b,c = 0,-1,5 a is c ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Comments ###Code print("Python Programming is enjoying") #this is a comment ###Output Python Programming is enjoying ###Markdown Python Variable ###Code x = 1 x, y=0, -1 x, y, z=0, -1, 2 print(z) print(y) print(z) ###Output 2 -1 2 ###Markdown Casting ###Code x="sally" y=int(4) print(x) y=float(y) print(y) ###Output sally 4.0 ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two!") #This code shows a String of words ###Output Five is greater than two! 
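###Markdown Several of the cells above compare variables with is and is not. As a cautionary sketch (the lists here are illustrative): == checks whether two values are equal, while is checks whether two names point to the same object, so the two operators can disagree. ###Code
a = [1, 2, 3]
b = [1, 2, 3]
c = a
print(a == b)  # True: the two lists hold equal values
print(a is b)  # False: they are two separate list objects
print(a is c)  # True: c is just another name for the same object as a
###Output True
False
True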
###Markdown Pyhton Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = 'Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(int(4)) f = 56.789 print(type(f)) x, y, z = "one","two", "three" print(x) print(y) print(z) x = y = z = "four" # Multiple variable with single value print(x) print(y) print(z) x = "enjoying" print("Python programming is"" "+ x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k+=l #Is the same as k = k+l print(k) k>l and l==l k<l or k==k not (k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown Intro to Python Programming ###Code #Python Indention if 5>2: print("Five is greater than two") ###Output Five is greater than two ###Markdown Python Variable ###Code x = 1 a, b = 0, 1 a,b,c = "zero", "one", "two" print(x) print(a) print(b) print(c) d = "Sally" #This is a string D = "Ana" print(d) e = 'John' print(e) print(D) print(type(d)) #This is a type function print(type(x)) ###Output <class 'str'> <class 'int'> ###Markdown Casting ###Code f = float(4) print(f) g = int(5) print(g) ###Output 4.0 5 ###Markdown Multiple variables with One Value ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python Programming is"" " + x) ###Output Python Programming is enjoying ###Markdown Operations in Python ###Code x = 5 y = 7 x += y #This is the same as x = x + y print(x) x = 5 y = 7 not(x>y or y==x) x is y x is not y ###Output _____no_output_____ ###Markdown ###Code ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b = "Sally" print (b) a, b, c = 0, 1, 2 print(a) print(b) print(c) x=y=z = "0" print(x) print(y) print(z) x,y,z = "one","two","three" print(x) print(y) print(z) b = "sally" #This is a type of string b =int(4) print (b) b=float(4) print(b) ###Output 4.0 ###Markdown Python Programming ###Code x = "enjoying" print('Python programming is '+ x) x = 4 y = 5 print(x+y) print(x-y) x<y and x==x ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b= "Sally" print(type(b)) ###Output <class 'str'> ###Markdown Naming Variables ###Code a = 'Sally' A = "John" print(a) print(A) a, b, c = 0, 1, 2 print(type(a)) #This is a program using type function print(b) print(c) a = 4.50 print(type(a)) ###Output <class 'float'> ###Markdown One Value to Multiple Variables ###Code x = y = z = "four" print(x) print(y) print(z) x = "enjoying" print("Python programming is" + " " + x) x = 4 y = 5 print(x+y) print(x-y) not(x>y or x == x) #This is an example of program using logical operator ###Output _____no_output_____ ###Markdown Python Indentation ###Code if 5>2: print("Five is greater than two!") #This code shows a string of words. ###Output Five is greater than two! 
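###Markdown Building on the casting cells above, a short consolidated sketch (values are illustrative) of int(), float(), and str() applied to the same kind of data. ###Code
n = 4
print(float(n))     # 4.0
print(int(4.9))     # 4, int() drops the fractional part
print(str(n))       # the text '4'
print(type(str(n)))
###Output 4.0
4
4
<class 'str'>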
###Markdown Python Variable ###Code a, b, c = 0, 1, 2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = "Raymond" #This is a type of string print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(float(4)) ###Output 4.0 ###Markdown Type()Function ###Code f = 56 print(type(f)) x, y, z ="one", "two", "three" print(x) print(y) print(z) x = y = z = "four" #Multiple variables with single value print(x) print(y) print(z) x = "enjoying" print("Python programming is " + x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k += l #k is the same k = k+1 print(k) k>1 and l==l k<l or k==k not(k<l or k==k) k += l #k is the same k= k+l print(k) k>1 and l==l k<l or k==k not(k<l or k==k) k is not l k%=5 ###Output _____no_output_____ ###Markdown Introduction to Python Programming ###Code b = "Sally" print(type(b)) ###Output _____no_output_____ ###Markdown Naming Variables ###Code a = 'Sally' A = "John" print(a) print(A) a,b,c = 1,2 print(type(a)) #This is a program using typing function print(b) print(c) a = 4.50 print(type(a)) ###Output <class 'float'> ###Markdown One value to Multiple Variables ###Code x = y = z ="four" print(x) print(y) print(z) x = "enjoying" print("Python programming is " +x) x = 4 y = 5 print(x+y) print(x-y) not(x>y or x==x) #This is an example of program using logical operator ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") #This code shows a string of words. ###Output Five is greater than two ###Markdown Python Variable ###Code a, b, c = 0, 1, 2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = "Raymond" #This is a type of string print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(float(4)) ###Output 4.0 ###Markdown Type Function ###Code f = 56 print(type(f)) x, y, z = "one", "two", "three" print(x) print(y) print(z) x = y = z = "four" #Multiple Variables with single value print(x) print(y) print(z) x = "enjoying" print("Python programming is " + x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k += l #k is the same k = k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is not l k%=5 ###Output _____no_output_____ ###Markdown Phyton Identation ###Code if 5>2: print("Five is greater than two!") ###Output Five is greater than two! 
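###Markdown The operations cells above use += and %= one at a time; this is a small recap sketch (the starting value 10 is arbitrary) showing several augmented assignments applied in sequence. ###Code
k = 10
k += 3   # same as k = k + 3, so k is now 13
k -= 1   # same as k = k - 1, so k is now 12
k *= 2   # same as k = k * 2, so k is now 24
k %= 5   # same as k = k % 5, the remainder of 24 / 5, so k is now 4
print(k)
###Output 4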
###Markdown Phyton Variable ###Code a,b,c=0,1,2 d="Sally" s="Mark" A="Raymond" print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code In [4] print(int(4)) In [5] f=56.789 print(type(f)) x,y,z="one","two","three" print(x) print(y) print(z) In [9] x=y=z="four" print(x) print(y) print(z) x="enjoying" print("Phyton Programming"" "+x) ###Output Phyton Programming enjoying ###Markdown Operation is Phyton ###Code In [17] k=10 l=5 print(k+l) In [18] k+=l print(k) In [19] k>l and l==l In [20] k<1 and k==k In [22] not (k<1 or k==k) k%=5 k ###Output _____no_output_____ ###Markdown Phyton Variable ###Code x = 1 a, b = 0, -1 a, b, c = 0, -1, 2 b = "Sally" ###Output _____no_output_____ ###Markdown Casting ###Code b = "sally" b = int (4) print (b) b = float (4) print (b) ###Output 4 4.0 ###Markdown Type () Function ###Code x = 5 y = "John" print (type(x)) print (type (y)) ###Output <class 'int'> <class 'str'> ###Markdown Double Quotes or Single Quotes ###Code y = "John" x = 'John' print (y) print (x) ###Output John John ###Markdown Case Sensitive ###Code a = 4 A = "Sally" #A will not over write a print (a) print (A) ###Output 4 Sally ###Markdown Multiple Variables ###Code x, y, z = "one", "two", "three" print (x) print (y) print (z) ###Output one two three ###Markdown One Value to Multiple Variables ###Code x = y = z = "four" print (x) print (y) print (z) ###Output four four four ###Markdown Output Variables ###Code x = "enjoying" print ("Phython programming is" + x) x = "Phyton is" y = "enjoying" z = x + y print (z) ###Output Phyton isenjoying ###Markdown Operations of Phyton ###Code k = 10 l = 5 print(k+l) k+=l print (k) k>l and l==l k<l or k==k not (k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print ("Five is greater than two") #This code shows a string of words myvar = "John" print (myvar) ###Output John ###Markdown Python Variable ###Code x=1 a,b,c=-0,-1,2 print (a) print (b) print (c) ###Output 0 -1 2 ###Markdown Casting ###Code b="sally" #This is a type of string b=int(4) print (b) print (float(b)) print (int(b)) print (type(b)) print (type (x)) ###Output 4 4.0 4 <class 'int'> <class 'int'> ###Markdown Using Double quotes or Single quotes ###Code d="John" e='Patrick' print (d) print (e) ###Output John Patrick ###Markdown Case Sensitive ###Code f=4 F="Mark" print (f) print (F) ###Output 4 Mark ###Markdown One Value to Multiple Variables ###Code l=m=n="seven" print (l) print (m) print (n) ###Output seven seven seven ###Markdown Output Variables ###Code x="enjoying" y="Python is " z= y + x print ("Python programming is enjoying") print ("Python programming is " + x) print (z) ###Output Python programming is enjoying Python programming is enjoying Python is enjoying ###Markdown Arithmetic Operation ###Code x=5 y=3 sum=x+y print (x+y) sum ###Output 8 ###Markdown Assignment Operators ###Code a,b,c,=0,-1,6 c%=3 b+=10 print (b) print (c) b ###Output 9 0 ###Markdown Logical Operators ###Code a,b,c=0,-1,6 a>b and c>b a,b,c=0,-1,6 a>b or c<b a,b,c=0,-1,6 not(a>b and c>b) ###Output _____no_output_____ ###Markdown Identity Operators ###Code a,b,c=0,-1,6 a is c ###Output _____no_output_____ ###Markdown Python Indentation ###Code if 5 > 2: print("Five is greater than two!") #this code shows a string of word ###Output Five is greater than two! 
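###Markdown The output-variable cells above join strings with +. One gotcha worth a quick hedged sketch (the name and value are illustrative): a number must be cast to str before it can be concatenated to text. ###Code
age = 19
# print("I am " + age + " years old")   # would raise a TypeError: + cannot join str and int
print("I am " + str(age) + " years old")
###Output I am 19 years old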
###Markdown Python Variable ###Code x = 1 a, b=0,-1 a, b, c=0,1,2 print(a) print(b) print(c) ###Output 0 1 2 ###Markdown Cast ###Code b="sally" #This is a type of string b=int(4) print(b) b=float(4) print(b) ###Output 4 4.0 ###Markdown Type() function ###Code x = 5 y = "John" print(type(x)) print(type(y)) ###Output <class 'int'> <class 'str'> ###Markdown “Double quotes” or ‘Single’ quotes ###Code x= "John" y= 'John' print(y) print(x) ###Output John John ###Markdown Case Sensitive ###Code a = 4 A= "Sally" print(a) print(A) #A will not overwrite a ###Output 4 Sally ###Markdown Multiple Variable ###Code x, y, z=1,2,3 print(x) print(y) print(z) ###Output 1 2 3 ###Markdown One Value to Multiple Variables ###Code x = y = z = "four" print(x) print(y) print(z) ###Output four four four ###Markdown Output Variables ###Code x = 10 y = 5 z = x + y print(z) ###Output 15 ###Markdown Operations ###Code k=10 l=5 print(k+l) k<5 and l<1 k>5 or l<1 not(k<5 and l<1) k is l k is not l ###Output _____no_output_____ ###Markdown introduction to python ###Code b=float(4) print(b) A= "Si sally " B= "at " Num= "ang sampung" C= " kalabaw" print(type(a)) print(A + B + Num + C) x=y=z="four" print(x) print(y) print(z) ###Output four four four ###Markdown ###Code a, b, c, d = 1, 2, 3, 4 print(a+d) print(d-c) print(b*c) print(b**2) ###Output 5 1 6 4 ###Markdown Introduction to Python Programming ###Code b = "Sally" print(type(b)) ###Output <class 'str'> ###Markdown Naming Variables ###Code a = "Sally" A = 'John' print(a) print(A) a, b, c = 1, 2 , 3 print(type(a)) #This is a program using type function print(b) print(c) a = 4.50 print(type(a)) ###Output <class 'float'> ###Markdown One Value to Multiple Variables ###Code x = y = z ="four" print(x) print(y) print(z) x = "enjoying" print("Python programming is"+" "+ x) x = 4 y = 5 print(x+y) print(x-y) not(x<y and x==x) #This is an example of program using logical operator ###Output _____no_output_____ ###Markdown Intro to Python ###Code if 5>2 : print ("five is greater than two!") x= 1 x, y = 1, 2 x, y, z = 1, 2, 3 print ("x") print ("y") print ("z") b= int(4) b c= float(4) c x= 5 y= "Ernest" print (type(x)) print (type(y)) y= 'John' x= "John" print(y) print(x) a= 5 A= 9 print (a) print (A) x, y, z = 4,5,6 print (x) print (y) print (z) x=y=z = 'four' print (x) print (y) print (z) x= "enjoying" print( 'Python programming is' ' ' + x) x= 10 y= 23 z= 6 print (x+y+z) x+=8 print (x) x!=x x==x x= 5 y= 7 z= 9 x<y and y<z x= 5 y= 7 z= 9 x<y or y>z x= 5 y= 7 z= 9 not(x<y and y>z) x= 7 y= 7 x is y x= 8 y= 10 x is not y ###Output _____no_output_____ ###Markdown Python Indention ###Code if 7>3: print("Seven is greater than three!") #This code shows a String of words ###Output Seven is greater than three! 
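###Markdown As a compact recap of the and / or / not cells above (the operands are illustrative), here are the three logical operators evaluated on the same pair of comparisons. ###Code
x, y = 5, 7
print(x < y and y == 7)  # True: both sides are True
print(x > y or y == 7)   # True: or needs only one True side
print(not (x < y))       # False: not inverts the True comparison
###Output True
True
False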
###Markdown Pyhton Variable ###Code a, b, c=6,5,4 z = "Erna" #This is a type of string x = 'Gemma' #This is a type of string H = 'Marites' print(a) print(b) print(c) print(z) print(x) print(H) ###Output 6 5 4 Erna Gemma Marites ###Markdown Casting ###Code print(int(5)) f = 34.35 print(type(f)) x, y, z = "Gemma","Anna", "Fe" print(x) print(y) print(z) x = y = z = "mine" # Multiple variable with single value print(x) print(y) print(z) k = "myself" print("I love"" "+ k) ###Output I love myself ###Markdown Operations in Python ###Code v = 20 x = 30 z = 40 print(v-x) v*=x #Is the same as v = v*x print(v) v<=x and v<=x x<v or v==x not (x<z and z==x) z is not z ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") #This code shows a string of words ###Output Five is greater than two ###Markdown Python Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = 'Mark' #This is a type of string A = 'Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(int(4)) print(float(4)) f = 56.789 print(type(f)) x, y, z = "one", "two", "three" print(x) print(y) print(z) x = y = z = "four" # Multiple variable with single value print(x) print(y) print(z) x = "enjoying" print("Python programming is" " " + x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k+=l #Is the same as k = k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is l k is not l k%=5 k ###Output _____no_output_____ ###Markdown Phyton INDENTATION ###Code if 5>2: print("Five is greater than two!") # ###Output Five is greater than two! ###Markdown Phyton Variable ###Code a, b, c = 0,-1,2 print (a) print (b) print (c) ###Output 0 -1 2 ###Markdown CASTING ###Code b ="sally" #this is a type of string b=int(3) print(b) b=float(3) print(b) ###Output _____no_output_____ ###Markdown TYPE () FUNCTION ###Code x="1231" y= 5 print (type(x)) print(type(y)) ###Output <class 'str'> <class 'int'> ###Markdown Double Quotes or Single Quotes ###Code x= "jane" x= 'john' print(x) x='jane' x="john" print(x) x="john" x='Jane' print(x) ###Output _____no_output_____ ###Markdown Case Sensitive ###Code y=4 Y=31 print(y) print(Y) ###Output _____no_output_____ ###Markdown Multiple Variables ###Code a,b,c=5,3,1 print(a) print(b) print(c) ###Output _____no_output_____ ###Markdown One to multiple variable ###Code x=y=z="Five" print(x) print(y) print(z) a=b=c=8 print(a) print(b) print(c) ###Output 8 8 8 ###Markdown Output variables ###Code x= "enjoying" print ("Valorant is " + x) ###Output _____no_output_____ ###Markdown Arithmetic Operations ###Code x=5 y = 3 sum=x+y sum x=5 y = 3 print (x + y) x=5 y=3 sum=x+y sum ###Output 8 ###Markdown Assignment Operators ###Code a,b,c=0,2,13 c%b #% is the remainder ###Output _____no_output_____ ###Markdown Logical Operators ###Code a,b,c=0,-1,6 -2>b and c>b #if one is wrong then all is false ###Output _____no_output_____ ###Markdown Identity Operators ###Code a,b,c=0,-1,5 a is c ###Output _____no_output_____ ###Markdown ###Code ###Output _____no_output_____ ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") print("Good Morning Everybody!") #This is a comment x=1 x,y=1,2 x,y,z=1,2,3 print(x) print(y) print(z) b=int(4) print(b) c=float(4) print(c) x=10 y="IAN" print(type(x)) print(type(y)) y='IAN' y="Ian" print(y) a=5 #A will not overwrite a. 
A="ian" print(a) print(A) x,y,z=1,2,3 print(x) print(y) print(z) x=y=z="Congratulations!" print(x) print(y) print(z) x="Birthday!" print("Happy "+x) x="Good" y="Morning" z=x+y print(z) x=100 y=100 print(x+y) x=1 y=10 z=2 x=100 #assignment operator x+=110 print(x) x==y #comparison operator x!=z x==y and x!=z #logical operator x,y,z=1,4,3 x is y #Identity operator ###Output _____no_output_____ ###Markdown ###Code from sklearn import datasets dataset = datasets.load_iris() print('특성 이름:\n'.format(dataset['feature_names'])) print('입력 데이터:\n'.format(dataset['data'][:5])) print('타깃 이름:\n'.format(dataset['target_names'])) print('타깃:\n'.format(dataset['target'])) ###Output 특성 이름: ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'] 입력 데이터: [[5.1 3.5 1.4 0.2] [4.9 3. 1.4 0.2] [4.7 3.2 1.3 0.2] [4.6 3.1 1.5 0.2] [5. 3.6 1.4 0.2]] 타깃 이름: ['setosa' 'versicolor' 'virginica'] 타깃: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2] ###Markdown Introduction to Python ###Code #Python Indention if 5>2: print("five is greater than two!") x = 1 # This is a single variable with single value x,y = 1,2 # these are two variables with two different values x,y,z= 1,2,3 print(x) print(y) print(z) x,y="four",2 x y x ###Output _____no_output_____ ###Markdown Casting ###Code b = int(4) b c= float(4) c ###Output _____no_output_____ ###Markdown Type Function ###Code x=5 y= "John" # This is a type of string h= "ana" H='Ana' print(type(x)) print(type(y)) print(h) print(H) ###Output <class 'int'> <class 'str'> ana Ana ###Markdown One Value to Multiple Variables ###Code x = y = z = 'four' print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) x = 11 y = 12 z = 13 print(x+y+z) x+=3 #This is the same as x = +3 print(x) y+=5 print(y) x<y and x!=x # pag isang lang yung true false na x>y or not y==z # kahit isa lang yung true, true paden not(print(x>y)) #Identity operations print (x is y) print (x is not z) ###Output False True ###Markdown Introduction to Python ###Code #Python Indention if 5>2: print("five is greater than two") x = 1 #this is a single variable with single value x, y = 1,2 #these are twp variables with two different values y, y, z = 1, 4, 3 print(x) print(y) print(z) #Casting b = int(4) b c = float(4) c x = 5 y = "john" #this is a type of string h = "ana" H = "COL" #H is not equal to h print(type(x)) print(type(y)) print(h) print(H) x = y = z = "four" print (x) print (y) print (z) x = "enjoying" print("Python Programming is" " " + x) x= 11 y = 12 print (x+y) x+=3 #this is the same as the x = x+3 print(x) y+=5 print(y) print(x>y) x!= x not(print(x)) print(x) x < 5 and x > 12 x < 5 or x < 10 #"or" it would display the desc. of one the set variable/statements ###Output _____no_output_____ ###Markdown ###Code print ("I'm John Cedric Pengon from BSECE 2-1") print("Taking Bachelors of Science in Electronics Engineering") age = 19 num1 = 24 num2 = 56 # Add two numbers sum = num1 + num2 # Display the sum print('The sum of {0} and {1} is {2}'.format(num1, num2, sum)) ###Output The sum of 24 and 56 is 80 ###Markdown Python Indentation ###Code if 5>2: print("Five is greater than two!") #This code shows a string of words ###Output Five is greater than two! 
###Markdown Python Variable ###Code a,b,c=0,1,2 d="Sally" #This is a type of string s='Mark' #This is a type of string A='Raymond' print(a) print(b) print(c) print(d) print(s) print(A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print(float(4.6)) f=56.789 print(type(f)) x,y,z="one","two","three" print(x) print(y) print(z) x=y=z="four" print(x) print(y) print(z) x="enjoying" print("Python programming is"" "+x) ###Output Python programming is enjoying ###Markdown Operations in Python ###Code k=10 l=5 print(k+l) k+=l #Is the same as k=k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is l k is not l k%=5 k ###Output _____no_output_____ ###Markdown Introduction to Python ###Code #Python Indention if 5>2: print("five is greater than two!") x = 1 # This is a single variable with single value x,y = 1,2 # these are two variables with two different values x,y,z= 1,2,3 print(x) print(y) print(z) x,y="four",2 x y x ###Output _____no_output_____ ###Markdown Casting ###Code b = int(4) b c= float(4) c ###Output _____no_output_____ ###Markdown Type Function ###Code x=5 y= "John" # This is a type of string h= "ana" H='Ana' print(type(x)) print(type(y)) print(h) print(H) ###Output <class 'int'> <class 'str'> ana Ana ###Markdown One Value to Multiple Variables ###Code x = y = z = 'four' print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) x = 11 y = 12 z = 13 print(x+y+z) x+=3 #This is the same as x = +3 print(x) y+=5 print(y) x<y and x!=x # pag isang lang yung true false na x>y or not y==z # kahit isa lang yung true, true paden not(print(x>y)) #Identity operations print (x is y) print (x is not z) ###Output False True ###Markdown Python Indention ###Code if 5>2: print("Five is greater than two") #This code shows a string of words ###Output Five is greater than two ###Markdown Python Variable ###Code a, b, c=0,1,2 d = "Sally" #This is a type of string s = "Mark" #This is a type of string A = "Raymond" print (a) print (b) print (c) print (d) print (s) print (A) ###Output 0 1 2 Sally Mark Raymond ###Markdown Casting ###Code print (float(4)) f = 56.78 print(type(f)) x, y, z, = "one", "two", "three" print (x) print (y) print (z) x = y = z ="four" # multiple variable with single value print (x) print (y) print (z) x = y = z ="twenty" # multiple variable with single value print (x) print (y) print (z) print("Python programming is enjoying") x = "enjoying" print("Python programing is" + x) x = "enjoying" print("Python programing is" " " + x) ###Output Python programing is enjoying ###Markdown Operations in Python ###Code k = 10 l = 5 print(k+l) k+=l #Is the same as k = k+l print(k) k>l and l==l k<l or k==k not(k<l or k==k) k is l k%=5 k ###Output _____no_output_____ ###Markdown Introduction to Python ###Code if 5>2: print("Five is greater than two") x = 1 #This is a single variable x,y=1,2 #These are two variables with two different values x,y,z=1,2,3 print(x) print(y) print(z) x,y="four",2 x y x ###Output _____no_output_____ ###Markdown Casting ###Code b = int(4) b c = float(4) c ###Output _____no_output_____ ###Markdown Type Function ###Code x = 5 y = "John" #This is a type of string h = "ino" H = "Ino" print(type(x)) print(type(y)) print(h) print(H) ###Output <class 'int'> <class 'str'> ino Ino ###Markdown One Value to Multiple Variables ###Code x = y = z = 'four' print(x) print(y) print(z) x = "enjoying" print("Python Programming is" " " + x) x = 11 y = 12 z = 13 print(x+y+z) x+=3 #This is the same as x = x =3 print(x) y+=5 print(y) print(x>y) x<y and x==x 
x<y and x!=x x<y or y==y x>y or y==z not(print(x>y)) #Identity operations print(x is y) print(x is not z) ###Output False True
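###Markdown Before the section moves on to the TensorFlow quickstart below, one last sketch (values are illustrative) tying together the type() checks that recur throughout the notebooks above; note that comparisons like the ones just shown produce bool values. ###Code
x = 5
y = 4.5
z = "John"
print(type(x))      # <class 'int'>
print(type(y))      # <class 'float'>
print(type(z))      # <class 'str'>
print(type(x < y))  # <class 'bool'>, the result of a comparison
###Output <class 'int'>
<class 'float'>
<class 'str'>
<class 'bool'>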
site/it/tutorials/quickstart/beginner.ipynb
###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Guida rapida a Tensorflow 2 per principianti Visualizza su TensorFlow.org Esegui in Google Colab Visualizza il sorgente su GitHub Scarica il notebook Note: La nostra comunità di Tensorflow ha tradotto questi documenti. Poichè queste traduzioni sono *best-effort*, non è garantito che rispecchino in maniera precisa e aggiornata la [documentazione ufficiale in inglese](https://www.tensorflow.org/?hl=en). Se avete suggerimenti per migliorare questa traduzione, mandate per favore una pull request al repository Github [tensorflow/docs](https://github.com/tensorflow/docs). Per proporsi come volontari alla scrittura o alla review delle traduzioni della comunità contattate la [mailing list [email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Questa breve introduzione usa [Keras](https://www.tensorflow.org/guide/keras/overview) per:1. Costruire una rete neurale che classifica immagini.2. Addestrare questa rete neurale.3. E, infine, valutare l'accuratezza del modello. Questo è un [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. I programmi Python sono eseguiti direttamente nel browser—un ottimo modo per imparare e utilizzare TensorFlow. Per seguire questo tutorial, esegui il file notebook in Google Colab cliccando sul bottone in cima a questa pagina.1. All'interno di Colab, connettiti al runtime di Python: In alto a destra della barra dei menu, seleziona *CONNECT*.2. Esegui tutte le celle di codice di notebook: Seleziona *Runtime* > *Run all*. Scarica e installa il package TensorFlow 2. Importa TensorFlow nel tuo codice: ###Code # Install TensorFlow import tensorflow as tf ###Output _____no_output_____ ###Markdown Carica e prepara il [dataset MNIST](http://yann.lecun.com/exdb/mnist/). Converti gli esempi da numeri di tipo integer a floating-point: ###Code mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ###Output _____no_output_____ ###Markdown Costruisci il modello `tf.keras.Sequential` tramite layer. Scegli un metodo di ottimizzazione e una funzione obiettivo per l'addestramento: ###Code model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Addestra e valuta il modello: ###Code model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Guida rapida a Tensorflow 2 per principianti Visualizza su TensorFlow.org Esegui in Google Colab Visualizza il sorgente su GitHub Scarica il notebook Note: La nostra comunità di Tensorflow ha tradotto questi documenti. Poichè queste traduzioni sono *best-effort*, non è garantito che rispecchino in maniera precisa e aggiornata la [documentazione ufficiale in inglese](https://www.tensorflow.org/?hl=en). Se avete suggerimenti per migliorare questa traduzione, mandate per favore una pull request al repository Github [tensorflow/docs](https://github.com/tensorflow/docs). Per proporsi come volontari alla scrittura o alla review delle traduzioni della comunità contattate la [mailing list [email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Questa breve introduzione usa [Keras](https://www.tensorflow.org/guide/keras/overview) per:1. Costruire una rete neurale che classifica immagini.2. Addestrare questa rete neurale.3. E, infine, valutare l'accuratezza del modello. Questo è un [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. I programmi Python sono eseguiti direttamente nel browser—un ottimo modo per imparare e utilizzare TensorFlow. Per seguire questo tutorial, esegui il file notebook in Google Colab cliccando sul bottone in cima a questa pagina.1. All'interno di Colab, connettiti al runtime di Python: In alto a destra della barra dei menu, seleziona *CONNECT*.2. Esegui tutte le celle di codice di notebook: Seleziona *Runtime* > *Run all*. Scarica e installa il package TensorFlow 2. Importa TensorFlow nel tuo codice: ###Code from __future__ import absolute_import, division, print_function, unicode_literals # Install TensorFlow try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf ###Output _____no_output_____ ###Markdown Carica e prepara il [dataset MNIST](http://yann.lecun.com/exdb/mnist/). Converti gli esempi da numeri di tipo integer a floating-point: ###Code mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ###Output _____no_output_____ ###Markdown Costruisci il modello `tf.keras.Sequential` tramite layer. Scegli un metodo di ottimizzazione e una funzione obiettivo per l'addestramento: ###Code model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Addestra e valuta il modello: ###Code model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Guida rapida a Tensorflow 2 per principianti Visualizza su TensorFlow.org Esegui in Google Colab Visualizza il sorgente su GitHub Scarica il notebook Note: La nostra comunità di Tensorflow ha tradotto questi documenti. Poichè queste traduzioni sono *best-effort*, non è garantito che rispecchino in maniera precisa e aggiornata la [documentazione ufficiale in inglese](https://www.tensorflow.org/?hl=en). Se avete suggerimenti per migliorare questa traduzione, mandate per favore una pull request al repository Github [tensorflow/docs](https://github.com/tensorflow/docs). Per proporsi come volontari alla scrittura o alla review delle traduzioni della comunità contattate la [mailing list [email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Questa breve introduzione usa [Keras](https://www.tensorflow.org/guide/keras/overview) per:1. Costruire una rete neurale che classifica immagini.2. Addestrare questa rete neurale.3. E, infine, valutare l'accuratezza del modello. Questo è un [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. I programmi Python sono eseguiti direttamente nel browser—un ottimo modo per imparare e utilizzare TensorFlow. Per seguire questo tutorial, esegui il file notebook in Google Colab cliccando sul bottone in cima a questa pagina.1. All'interno di Colab, connettiti al runtime di Python: In alto a destra della barra dei menu, seleziona *CONNECT*.2. Esegui tutte le celle di codice di notebook: Seleziona *Runtime* > *Run all*. Scarica e installa il package TensorFlow 2. Importa TensorFlow nel tuo codice: ###Code from __future__ import absolute_import, division, print_function, unicode_literals # Install TensorFlow try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf ###Output _____no_output_____ ###Markdown Carica e prepara il [dataset MNIST](http://yann.lecun.com/exdb/mnist/). Converti gli esempi da numeri di tipo integer a floating-point: ###Code mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ###Output _____no_output_____ ###Markdown Costruisci il modello `tf.keras.Sequential` tramite layer. Scegli un metodo di ottimizzazione e una funzione obiettivo per l'addestramento: ###Code model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Addestra e valuta il modello: ###Code model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Guida rapida a Tensorflow 2 per principianti Visualizza su TensorFlow.org Esegui in Google Colab Visualizza il sorgente su GitHub Scarica il notebook Note: La nostra comunità di Tensorflow ha tradotto questi documenti. Poichè questa traduzioni della comunità sono *best-effort*, non c'è garanzia che questa sia un riflesso preciso e aggiornato della [documentazione ufficiale in inglese](https://www.tensorflow.org/?hl=en). Se avete suggerimenti per migliorare questa traduzione, mandate per favore una pull request al repository Github [tensorflow/docs](https://github.com/tensorflow/docs). Per proporsi come volontari alla scrittura o alla review delle traduzioni della comunità contattate la [mailing list [email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Questa breve introduzione usa [Keras](https://www.tensorflow.org/guide/keras/overview) per:1. Costruire una rete neurale che classifica immagini.2. Addestrare questa rete neurale.3. E, infine, valutare l'accuratezza del modello. Questo è un [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Programmi python sono eseguiti run direttamente nel browser—un ottimo modo per imparare e utilizzare TensorFlow. Per seguire questo tutorial, esegui il file notebook in Google Colab cliccando sul bottone in cima a questa pagina.1. All'interno di Colab, connettiti al runtime di Python: In alto a destra della barra dei menu, seleziona *CONNECT*.2. Esegui tutte le celle di codice di notebook: Seleziona *Runtime* > *Run all*. Scarica e installa il package TensorFlow 2. Importa TensorFlow nel tuo codice: ###Code from __future__ import absolute_import, division, print_function, unicode_literals # Install TensorFlow try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf ###Output _____no_output_____ ###Markdown Carica e prepara il [dataset MNIST](http://yann.lecun.com/exdb/mnist/). Converti gli esempi da numeri di tipo integer a floating-point: ###Code mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ###Output _____no_output_____ ###Markdown Costruisci il modello `tf.keras.Sequential` tramite layer. Scegli un metodo di ottimizzazione e una funzione obiettivo per l'addestramento: ###Code model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Addestra e valuta il modello: ###Code model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Guida rapida a Tensorflow 2 per principianti Visualizza su TensorFlow.org Esegui in Google Colab Visualizza il sorgente su GitHub Scarica il notebook Note: La nostra comunità di Tensorflow ha tradotto questi documenti. Poichè queste traduzioni sono *best-effort*, non è garantito che rispecchino in maniera precisa e aggiornata la [documentazione ufficiale in inglese](https://www.tensorflow.org/?hl=en). Se avete suggerimenti per migliorare questa traduzione, mandate per favore una pull request al repository Github [tensorflow/docs](https://github.com/tensorflow/docs). Per proporsi come volontari alla scrittura o alla review delle traduzioni della comunità contattate la [mailing list [email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Questa breve introduzione usa [Keras](https://www.tensorflow.org/guide/keras/overview) per:1. Costruire una rete neurale che classifica immagini.2. Addestrare questa rete neurale.3. E, infine, valutare l'accuratezza del modello. Questo è un [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. I programmi Python sono eseguiti direttamente nel browser—un ottimo modo per imparare e utilizzare TensorFlow. Per seguire questo tutorial, esegui il file notebook in Google Colab cliccando sul bottone in cima a questa pagina.1. All'interno di Colab, connettiti al runtime di Python: In alto a destra della barra dei menu, seleziona *CONNECT*.2. Esegui tutte le celle di codice di notebook: Seleziona *Runtime* > *Run all*. Scarica e installa il package TensorFlow 2. Importa TensorFlow nel tuo codice: ###Code # Install TensorFlow import tensorflow as tf ###Output _____no_output_____ ###Markdown Carica e prepara il [dataset MNIST](http://yann.lecun.com/exdb/mnist/). Converti gli esempi da numeri di tipo integer a floating-point: ###Code mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ###Output _____no_output_____ ###Markdown Costruisci il modello `tf.keras.Sequential` tramite layer. Scegli un metodo di ottimizzazione e una funzione obiettivo per l'addestramento: ###Code model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Addestra e valuta il modello: ###Code model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Guida rapida a Tensorflow 2 per principianti Visualizza su TensorFlow.org Esegui in Google Colab Visualizza il sorgente su GitHub Scarica il notebook Note: La nostra comunità di Tensorflow ha tradotto questi documenti. Poichè queste traduzioni sono *best-effort*, non è garantito che rispecchino in maniera precisa e aggiornata la [documentazione ufficiale in inglese](https://www.tensorflow.org/?hl=en). Se avete suggerimenti per migliorare questa traduzione, mandate per favore una pull request al repository Github [tensorflow/docs](https://github.com/tensorflow/docs). Per proporsi come volontari alla scrittura o alla review delle traduzioni della comunità contattate la [mailing list [email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). Questa breve introduzione usa [Keras](https://www.tensorflow.org/guide/keras/overview) per:1. Costruire una rete neurale che classifica immagini.2. Addestrare questa rete neurale.3. E, infine, valutare l'accuratezza del modello. Questo è un [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. I programmi Python sono eseguiti direttamente nel browser—un ottimo modo per imparare e utilizzare TensorFlow. Per seguire questo tutorial, esegui il file notebook in Google Colab cliccando sul bottone in cima a questa pagina.1. All'interno di Colab, connettiti al runtime di Python: In alto a destra della barra dei menu, seleziona *CONNECT*.2. Esegui tutte le celle di codice di notebook: Seleziona *Runtime* > *Run all*. Scarica e installa il package TensorFlow 2. Importa TensorFlow nel tuo codice: ###Code # Install TensorFlow import tensorflow as tf ###Output _____no_output_____ ###Markdown Carica e prepara il [dataset MNIST](http://yann.lecun.com/exdb/mnist/). Converti gli esempi da numeri di tipo integer a floating-point: ###Code mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ###Output _____no_output_____ ###Markdown Costruisci il modello `tf.keras.Sequential` tramite layer. Scegli un metodo di ottimizzazione e una funzione obiettivo per l'addestramento: ###Code model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Addestra e valuta il modello: ###Code model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) ###Output _____no_output_____
Assignment3/ImageGeneration.ipynb
###Markdown Image GenerationIn this notebook we will continue our exploration of image gradients using the deep model that was pretrained on TinyImageNet. We will explore various ways of using these image gradients to generate images. We will implement class visualizations, feature inversion, and DeepDream. ###Code # As usual, a bit of setup import time, os, json import numpy as np from scipy.misc import imread, imresize import matplotlib.pyplot as plt from cs231n.classifiers.pretrained_cnn import PretrainedCNN from cs231n.data_utils import load_tiny_imagenet from cs231n.image_utils import blur_image, deprocess_image, preprocess_image %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown TinyImageNet and pretrained modelAs in the previous notebook, load the TinyImageNet dataset and the pretrained model. ###Code data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True) model = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5') ###Output _____no_output_____ ###Markdown Class visualizationBy starting with a random noise image and performing gradient ascent on a target class, we can generate an image that the network will recognize as the target class. This idea was first presented in [1]; [2] extended this idea by suggesting several regularization techniques that can improve the quality of the generated image.Concretely, let $I$ be an image and let $y$ be a target class. Let $s_y(I)$ be the score that a convolutional network assigns to the image $I$ for class $y$; note that these are raw unnormalized scores, not class probabilities. We wish to generate an image $I^*$ that achieves a high score for the class $y$ by solving the problem$$I^* = \arg\max_I s_y(I) + R(I)$$where $R$ is a (possibly implicit) regularizer. We can solve this optimization problem using gradient descent, computing gradients with respect to the generated image. We will use (explicit) L2 regularization of the form$$R(I) + \lambda \|I\|_2^2$$and implicit regularization as suggested by [2] by peridically blurring the generated image. We can solve this problem using gradient ascent on the generated image.In the cell below, complete the implementation of the `create_class_visualization` function.[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: VisualisingImage Classification Models and Saliency Maps", ICLR Workshop 2014.[2] Yosinski et al, "Understanding Neural Networks Through Deep Visualization", ICML 2015 Deep Learning Workshop ###Code def create_class_visualization(target_y, model, **kwargs): """ Perform optimization over the image to generate class visualizations. Inputs: - target_y: Integer in the range [0, 100) giving the target class - model: A PretrainedCNN that will be used for generation Keyword arguments: - learning_rate: Floating point number giving the learning rate - blur_every: An integer; how often to blur the image as a regularizer - l2_reg: Floating point number giving L2 regularization strength on the image; this is lambda in the equation above. 
- max_jitter: How much random jitter to add to the image as regularization - num_iterations: How many iterations to run for - show_every: How often to show the image """ learning_rate = kwargs.pop('learning_rate', 10000) blur_every = kwargs.pop('blur_every', 1) l2_reg = kwargs.pop('l2_reg', 1e-6) max_jitter = kwargs.pop('max_jitter', 4) num_iterations = kwargs.pop('num_iterations', 100) show_every = kwargs.pop('show_every', 25) X = np.random.randn(1, 3, 64, 64) for t in xrange(num_iterations): # As a regularizer, add random jitter to the image ox, oy = np.random.randint(-max_jitter, max_jitter+1, 2) X = np.roll(np.roll(X, ox, -1), oy, -2) dX = None ############################################################################ # TODO: Compute the image gradient dX of the image with respect to the # # target_y class score. This should be similar to the fooling images. Also # # add L2 regularization to dX and update the image X using the image # # gradient and the learning rate. # ############################################################################ pass ############################################################################ # END OF YOUR CODE # ############################################################################ # Undo the jitter X = np.roll(np.roll(X, -ox, -1), -oy, -2) # As a regularizer, clip the image X = np.clip(X, -data['mean_image'], 255.0 - data['mean_image']) # As a regularizer, periodically blur the image if t % blur_every == 0: X = blur_image(X) # Periodically show the image if t % show_every == 0: plt.imshow(deprocess_image(X, data['mean_image'])) plt.gcf().set_size_inches(3, 3) plt.axis('off') plt.show() return X ###Output _____no_output_____ ###Markdown You can use the code above to generate some cool images! An example is shown below. Try to generate a cool-looking image. If you want you can try to implement the other regularization schemes from Yosinski et al, but it isn't required. ###Code target_y = 43 # Tarantula print data['class_names'][target_y] X = create_class_visualization(target_y, model, show_every=25) ###Output _____no_output_____ ###Markdown Feature InversionIn an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network.Concretely, given a image $I$, let $\phi_\ell(I)$ be the activations at layer $\ell$ of the convolutional network $\phi$. We wish to find an image $I^*$ with a similar feature representation as $I$ at layer $\ell$ of the network $\phi$ by solving the optimization problem$$I^* = \arg\min_{I'} \|\phi_\ell(I) - \phi_\ell(I')\|_2^2 + R(I')$$where $\|\cdot\|_2^2$ is the squared Euclidean norm. As above, $R$ is a (possibly implicit) regularizer. We can solve this optimization problem using gradient descent, computing gradients with respect to the generated image. 
We will use (explicit) L2 regularization of the form$$R(I') + \lambda \|I'\|_2^2$$together with implicit regularization by periodically blurring the image, as recommended by [2].Implement this method in the function below.[1] Aravindh Mahendran, Andrea Vedaldi, "Understanding Deep Image Representations by Inverting them", CVPR 2015[2] Yosinski et al, "Understanding Neural Networks Through Deep Visualization", ICML 2015 Deep Learning Workshop ###Code def invert_features(target_feats, layer, model, **kwargs): """ Perform feature inversion in the style of Mahendran and Vedaldi 2015, using L2 regularization and periodic blurring. Inputs: - target_feats: Image features of the target image, of shape (1, C, H, W); we will try to generate an image that matches these features - layer: The index of the layer from which the features were extracted - model: A PretrainedCNN that was used to extract features Keyword arguments: - learning_rate: The learning rate to use for gradient descent - num_iterations: The number of iterations to use for gradient descent - l2_reg: The strength of L2 regularization to use; this is lambda in the equation above. - blur_every: How often to blur the image as implicit regularization; set to 0 to disable blurring. - show_every: How often to show the generated image; set to 0 to disable showing intermediate reuslts. Returns: - X: Generated image of shape (1, 3, 64, 64) that matches the target features. """ learning_rate = kwargs.pop('learning_rate', 10000) num_iterations = kwargs.pop('num_iterations', 500) l2_reg = kwargs.pop('l2_reg', 1e-7) blur_every = kwargs.pop('blur_every', 1) show_every = kwargs.pop('show_every', 50) X = np.random.randn(1, 3, 64, 64) for t in xrange(num_iterations): ############################################################################ # TODO: Compute the image gradient dX of the reconstruction loss with # # respect to the image. You should include L2 regularization penalizing # # large pixel values in the generated image using the l2_reg parameter; # # then update the generated image using the learning_rate from above. # ############################################################################ pass ############################################################################ # END OF YOUR CODE # ############################################################################ # As a regularizer, clip the image X = np.clip(X, -data['mean_image'], 255.0 - data['mean_image']) # As a regularizer, periodically blur the image if (blur_every > 0) and t % blur_every == 0: X = blur_image(X) if (show_every > 0) and (t % show_every == 0 or t + 1 == num_iterations): plt.imshow(deprocess_image(X, data['mean_image'])) plt.gcf().set_size_inches(3, 3) plt.axis('off') plt.title('t = %d' % t) plt.show() ###Output _____no_output_____ ###Markdown Shallow feature reconstructionAfter implementing the feature inversion above, run the following cell to try and reconstruct features from the fourth convolutional layer of the pretrained model. You should be able to reconstruct the features using the provided optimization parameters. 
###Code filename = 'kitten.jpg' layer = 3 # layers start from 0 so these are features after 4 convolutions img = imresize(imread(filename), (64, 64)) plt.imshow(img) plt.gcf().set_size_inches(3, 3) plt.title('Original image') plt.axis('off') plt.show() # Preprocess the image before passing it to the network: # subtract the mean, add a dimension, etc img_pre = preprocess_image(img, data['mean_image']) # Extract features from the image feats, _ = model.forward(img_pre, end=layer) # Invert the features kwargs = { 'num_iterations': 400, 'learning_rate': 5000, 'l2_reg': 1e-8, 'show_every': 100, 'blur_every': 10, } X = invert_features(feats, layer, model, **kwargs) ###Output _____no_output_____ ###Markdown Deep feature reconstructionReconstructing images using features from deeper layers of the network tends to give interesting results. In the cell below, try to reconstruct the best image you can by inverting the features after 7 layers of convolutions. You will need to play with the hyperparameters to try and get a good result.HINT: If you read the paper by Mahendran and Vedaldi, you'll see that reconstructions from deep features tend not to look much like the original image, so you shouldn't expect the results to look like the reconstruction above. You should be able to get an image that shows some discernable structure within 1000 iterations. ###Code filename = 'kitten.jpg' layer = 6 # layers start from 0 so these are features after 7 convolutions img = imresize(imread(filename), (64, 64)) plt.imshow(img) plt.gcf().set_size_inches(3, 3) plt.title('Original image') plt.axis('off') plt.show() # Preprocess the image before passing it to the network: # subtract the mean, add a dimension, etc img_pre = preprocess_image(img, data['mean_image']) # Extract features from the image feats, _ = model.forward(img_pre, end=layer) # Invert the features # You will need to play with these parameters. kwargs = { 'num_iterations': 1000, 'learning_rate': 0, 'l2_reg': 0, 'show_every': 100, 'blur_every': 0, } X = invert_features(feats, layer, model, **kwargs) ###Output _____no_output_____ ###Markdown DeepDreamIn the summer of 2015, Google released a [blog post](http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html) describing a new method of generating images from neural networks, and they later [released code](https://github.com/google/deepdream) to generate these images.The idea is very simple. We pick some layer from the network, pass the starting image through the network to extract features at the chosen layer, set the gradient at that layer equal to the activations themselves, and then backpropagate to the image. This has the effect of modifying the image to amplify the activations at the chosen layer of the network.For DeepDream we usually extract features from one of the convolutional layers, allowing us to generate images of any resolution.We can implement this idea using our pretrained network. The results probably won't look as good as Google's since their network is much bigger, but we should still be able to generate some interesting images. ###Code def deepdream(X, layer, model, **kwargs): """ Generate a DeepDream image. 
Inputs: - X: Starting image, of shape (1, 3, H, W) - layer: Index of layer at which to dream - model: A PretrainedCNN object Keyword arguments: - learning_rate: How much to update the image at each iteration - max_jitter: Maximum number of pixels for jitter regularization - num_iterations: How many iterations to run for - show_every: How often to show the generated image """ X = X.copy() learning_rate = kwargs.pop('learning_rate', 5.0) max_jitter = kwargs.pop('max_jitter', 16) num_iterations = kwargs.pop('num_iterations', 100) show_every = kwargs.pop('show_every', 25) for t in xrange(num_iterations): # As a regularizer, add random jitter to the image ox, oy = np.random.randint(-max_jitter, max_jitter+1, 2) X = np.roll(np.roll(X, ox, -1), oy, -2) dX = None ############################################################################ # TODO: Compute the image gradient dX using the DeepDream method. You'll # # need to use the forward and backward methods of the model object to # # extract activations and set gradients for the chosen layer. After # # computing the image gradient dX, you should use the learning rate to # # update the image X. # ############################################################################ pass ############################################################################ # END OF YOUR CODE # ############################################################################ # Undo the jitter X = np.roll(np.roll(X, -ox, -1), -oy, -2) # As a regularizer, clip the image mean_pixel = data['mean_image'].mean(axis=(1, 2), keepdims=True) X = np.clip(X, -mean_pixel, 255.0 - mean_pixel) # Periodically show the image if t == 0 or (t + 1) % show_every == 0: img = deprocess_image(X, data['mean_image'], mean='pixel') plt.imshow(img) plt.title('t = %d' % (t + 1)) plt.gcf().set_size_inches(8, 8) plt.axis('off') plt.show() return X ###Output _____no_output_____ ###Markdown Generate some images!Try and generate a cool-looking DeepDeam image using the pretrained network. You can try using different layers, or starting from different images. You can reduce the image size if it runs too slowly on your machine, or increase the image size if you are feeling ambitious. ###Code def read_image(filename, max_size): """ Read an image from disk and resize it so its larger side is max_size """ img = imread(filename) H, W, _ = img.shape if H >= W: img = imresize(img, (max_size, int(W * float(max_size) / H))) elif H < W: img = imresize(img, (int(H * float(max_size) / W), max_size)) return img filename = 'kitten.jpg' max_size = 256 img = read_image(filename, max_size) plt.imshow(img) plt.axis('off') # Preprocess the image by converting to float, transposing, # and performing mean subtraction. img_pre = preprocess_image(img, data['mean_image'], mean='pixel') out = deepdream(img_pre, 7, model, learning_rate=2000) ###Output _____no_output_____
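###Markdown To make the DeepDream update concrete, here is a minimal sketch of what a single step inside the loop of `deepdream` could look like. It assumes, as the TODO comment describes, that `model.forward(X, end=layer)` returns the activations at the chosen layer together with a cache (this matches how `forward` is used for feature extraction above), and it assumes that `model.backward(dout, cache)` returns the gradient with respect to the input image as its first result; the exact `backward` signature may differ in your copy of the assignment code. ###Code
# One DeepDream step (sketch, using the names X, layer, model and
# learning_rate from the deepdream function above):
activations, cache = model.forward(X, end=layer)
# Setting the upstream gradient equal to the activations themselves
# amplifies those activations when we ascend the resulting image gradient
dX, _ = model.backward(activations, cache)
X += learning_rate * dX
###Output _____no_output_____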
notebooks/unsupervised/pca.ipynb
###Markdown ###Code %matplotlib inline import numpy as np from sklearn.decomposition import PCA # X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2], [4, 3], [4, -1]]) X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2], [4, 3], [0, 0]]) # X = np.array([[-1, 1], [-2, 2], [-3, 3], [1, 1], [2, 2], [3, 3], [4, 4]]) X import matplotlib.pyplot as plt plt.figure(figsize=(10,10)) plt.scatter(X[:, 0], X[:, 1]) plt.xlim(-4, 4) plt.ylim(-4, 4) pca = PCA(n_components=2) pca.fit(X) pca.explained_variance_ # sum is 1, first pc has a very high variance, i.e. is very good, second could be deleted pca.explained_variance_ratio_ pcs = pca.components_ pcs x_points = pca.transform([[-1, 0], [1, 0]]) x_points y_points = pca.transform([[0, -1], [0, 1]]) y_points plt.figure(figsize=(10,10)) plt.scatter(X[:, 0], X[:, 1]) plt.plot(x_points.transpose()[0], x_points.transpose()[1], 'go-') plt.plot(y_points.transpose()[0], y_points.transpose()[1], 'mo-') plt.xlim(-4, 4) plt.ylim(-4, 4) X X_transformed = pca.transform(X) X_transformed plt.figure(figsize=(10,10)) plt.scatter(X_transformed[:, 0], X_transformed[:, 1]) plt.xlim(-4, 4) plt.ylim(-4, 4) ###Output _____no_output_____ ###Markdown Reduction to 1 ###Code pca = PCA(n_components=1) pca.fit(X) pca.explained_variance_ # sum is 1, first pc has a very high variance, i.e. is very good, second could be deleted pca.explained_variance_ratio_ X_transformed = pca.transform(X) X_transformed plt.figure(figsize=(10,2)) plt.scatter(X_transformed, np.zeros(len(X_transformed))) plt.xlim(-4, 4) plt.ylim(-1, 1) ###Output _____no_output_____
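###Markdown A useful follow-up (a small sketch building on the 1-component fit above) is to map the projected points back into the original 2-D space with `inverse_transform` and measure how much information the projection loses: ###Code
# Reconstruct the points from their 1-D projection
X_reconstructed = pca.inverse_transform(X_transformed)
X_reconstructed
# Mean squared reconstruction error; a small value means the single
# principal component captures most of the variance in X
np.mean(np.sum((X - X_reconstructed) ** 2, axis=1))
###Output _____no_output_____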
Tutorials_python/pandas_multiple_condition_selection.ipynb
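###Markdown The cells below use `pd` and `np` without importing them; a minimal setup cell, assuming the standard aliases implied by that usage, would be: ###Code
import numpy as np
import pandas as pd
###Output _____no_output_____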
###Markdown load the dataset ###Code df_flights = pd.read_csv('../datasets/flights.csv') df_flights.head() df_flights.shape df_flights['carrier'].unique() df_flights['origin'].unique() ###Output _____no_output_____ ###Markdown Logical operators ###Code True and False True or False not False ###Output _____no_output_____ ###Markdown Bitwise operators ###Code print(np.binary_repr(10)) print(np.binary_repr(15)) print(np.binary_repr(10 & 15)) print(np.binary_repr(10 | 15)) True & False True | False ###Output _____no_output_____ ###Markdown Element-wise bitwise operation using numpy ###Code np.array([True, True, False]) & np.array([True, False, False]) np.array([True, True, False]) | np.array([True, False, False]) np.array([True, True, False]) & \ np.array([True, False, False]) & \ np.array([True, False, True]) & \ np.array([True, True, True]) ###Output _____no_output_____ ###Markdown numpy.logical_and ###Code np.logical_and([True, True, False], [True, False, False]) np.logical_or([True, True, False], [True, False, False]) np.logical_and.reduce([[True, True, False], [True, False, False], [True, False, True], [True, True, True]]) ###Output _____no_output_____ ###Markdown performing selection by multiple conditions ###Code df_flights.loc[ np.logical_and(df_flights['carrier'] == "AA", df_flights['origin'] == "SEA") ].head() df_flights.loc[ (df_flights['carrier'] == "AA") & (df_flights['origin'] == "SEA") ].head() df_flights.loc[ np.all([df_flights['carrier'] == "AA", df_flights['origin'] == "SEA"], axis=0) ].head() ###Output _____no_output_____ ###Markdown wrong way ###Code (df_flights['carrier'] == "AA") and (df_flights['origin'] == "SEA") ###Output _____no_output_____ ###Markdown explains the use of numpy.all, numpy.any ###Code np.all([df_flights['carrier'] == "AA", df_flights['origin'] == "SEA"], axis=1) np.all([df_flights['carrier'] == "AA", df_flights['origin'] == "SEA"]) ###Output _____no_output_____
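###Markdown For readability, pandas also offers `DataFrame.query`, which lets the same multi-condition selection be written as a single boolean expression string; a small sketch using the flights data loaded above: ###Code
# Equivalent to the (carrier == "AA") & (origin == "SEA") selection above
df_flights.query('carrier == "AA" and origin == "SEA"').head()
# Plain English operators (and / or / not) replace the bitwise &, |, ~
df_flights.query('carrier == "AA" or origin == "SEA"').head()
###Output _____no_output_____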
Model_CF_local.ipynb
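###Markdown The cells below call `pd`, `sq` and `np` without defining them; a minimal setup cell, assuming the aliases implied by that usage (`pandas` as `pd`, `sqlite3` as `sq`, `numpy` as `np`), would be: ###Code
import numpy as np
import pandas as pd
import sqlite3 as sq
###Output _____no_output_____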
###Markdown Import data from processed database ###Code #Set up data path = '/db/amazon_book_reviews_final.db' def import_data(db_path): conn = sq.connect(db_path) #sqliteDB path goes in parantheses crsr = conn.cursor() df = pd.read_sql_query(''' SELECT * FROM processed ; ''', conn) df['star_rating'] = df['star_rating'].astype(float) df['star_rating'] = df['star_rating'].astype(int) #convert rating to integer type df['helpful_votes'] = df['helpful_votes'].astype(int) #convert rating to integer type return df df = import_data(path) df.head(5) len(df) df.dtypes ###Output _____no_output_____ ###Markdown YellowBrick Viz SKIP ###Code num_dat = df[["star_rating","helpful_votes","product_title_length","review_body_length","cleaned_sentiment_star_rating","difference"]] from yellowbrick.features import Rank2D %matplotlib inline visualizer = Rank2D(algorithm="covariance") visualizer.fit_transform(num_dat) visualizer.poof() from yellowbrick.features import JointPlotVisualizer visualizer = JointPlotVisualizer(feature='star_rating', target='difference') visualizer.fit(df['star_rating'], df['difference']) visualizer.poof() from yellowbrick.features import JointPlotVisualizer visualizer = JointPlotVisualizer(feature='sentiment_star_rating', target='cleaned_sentiment_star_rating') visualizer.fit(X_dat['sentiment_star_rating'], X_dat['cleaned_sentiment_star_rating']) visualizer.poof() from yellowbrick.features import JointPlotVisualizer visualizer = JointPlotVisualizer(feature='sentiment_star_rating', target='star_rating') visualizer.fit(X_dat['sentiment_star_rating'], X_dat['star_rating']) visualizer.poof() np.corrcoef(X_dat['sentiment_star_rating'], X_dat['star_rating']) from yellowbrick.features import JointPlotVisualizer visualizer = JointPlotVisualizer(feature='cleaned_sentiment_star_rating', target='star_rating') visualizer.fit(X_dat['cleaned_sentiment_star_rating'], X_dat['star_rating']) visualizer.poof() np.corrcoef(X_dat['cleaned_sentiment_star_rating'], X_dat['star_rating']) from yellowbrick.features import JointPlotVisualizer visualizer = JointPlotVisualizer(feature='difference', target='star_rating') visualizer.fit(num_dat['difference'], num_dat['star_rating']) visualizer.poof() np.corrcoef(num_dat['difference'], num_dat['star_rating']) ###Output _____no_output_____ ###Markdown Clustering SKIP ###Code from sklearn.cluster import MiniBatchKMeans from yellowbrick.cluster import KElbowVisualizer # Instantiate the clustering model and visualizer visualizer = KElbowVisualizer(MiniBatchKMeans(), k=(4,12)) visualizer.fit(df) # Fit the training data to the visualizer visualizer.poof() # Draw/show/poof the data from sklearn.cluster import MiniBatchKMeans from yellowbrick.cluster import SilhouetteVisualizer # Instantiate the clustering model and visualizer model = MiniBatchKMeans(7) visualizer = SilhouetteVisualizer(model) visualizer.fit(X_dat) # Fit the training data to the visualizer visualizer.poof() # Draw/show/poof the data ###Output _____no_output_____ ###Markdown Modeling in Sci-Kit Learn SVD ###Code from scipy.sparse import csr_matrix from sklearn.decomposition import TruncatedSVD from sklearn.model_selection import GridSearchCV df_pivot = df.pivot_table(index='customer_id',columns='product_title',values='star_rating',fill_value=0) X = df_pivot.T #parameters = {'n_components': [5, 10, 15, 20, 25, 30] } SVD = TruncatedSVD(n_components=10, random_state=15) SVD.fit(X) #search = GridSearchCV(SVD, parameters, scoring='roc_auc') #matrix = search.transform(X) matrix = SVD.fit_transform(X) corr = 
np.corrcoef(matrix) book_title = df_pivot.columns print(SVD.explained_variance_) def print_recs(book_title, corr, title): book_list = book_title.tolist() book_title = np.asarray(book_title) book_idx = book_list.index(title) corr_target = corr[book_idx] corrs = np.concatenate((book_title,corr_target),axis=0) top_5_idx = np.argsort(corr_target)[-6:-1] top_5_values = [book_title[i] for i in top_5_idx] print(top_5_values) print_recs(book_title, corr, "The Stand") ###Output _____no_output_____ ###Markdown NMF ###Code from scipy.sparse import csr_matrix from sklearn.decomposition import NMF df_pivot = df.pivot_table(index='customer_id',columns='product_title',values='star_rating',fill_value=0) X = df_pivot.T NMFmod = NMF(n_components=12) matrix = NMFmod.fit_transform(X) corr = np.corrcoef(matrix) book_title = df_pivot.columns def print_recs(book_title, corr, title): book_list = book_title.tolist() book_title = np.asarray(book_title) book_idx = book_list.index(title) corr_target = corr[book_idx] corrs = np.concatenate((book_title,corr_target),axis=0) top_5_idx = np.argsort(corr_target)[-6:-1] top_5_values = [book_title[i] for i in top_5_idx] print(top_5_values) print_recs(book_title, corr, "The Stand") ###Output _____no_output_____ ###Markdown Modeling in LightFMWill allow for incorporation of product metadata! ###Code def create_interaction_matrix(df,user_col, item_col, rating_col, norm= False, threshold = None): ''' Function to create an interaction matrix dataframe from transactional type interactions Required Input - - df = Pandas DataFrame containing user-item interactions - user_col = column name containing user's identifier - item_col = column name containing item's identifier - rating col = column name containing user feedback on interaction with a given item - norm (optional) = True if a normalization of ratings is needed - threshold (required if norm = True) = value above which the rating is favorable Expected output - - Pandas dataframe with user-item interactions ready to be fed in a recommendation algorithm ''' interactions = df.groupby([user_col, item_col])[rating_col] \ .sum().unstack().reset_index(). 
\ fillna(0).set_index(user_col) if norm: interactions = interactions.applymap(lambda x: 1 if x > threshold else 0) return interactions def create_user_dict(interactions): ''' Function to create a user dictionary based on their index and number in interaction dataset Required Input - interactions - dataset create by create_interaction_matrix Expected Output - user_dict - Dictionary type output containing interaction_index as key and user_id as value ''' user_id = list(interactions.index) user_dict = {} counter = 0 for i in user_id: user_dict[i] = counter counter += 1 return user_dict def create_item_dict(df,id_col,name_col): ''' Function to create an item dictionary based on their item_id and item name Required Input - - df = Pandas dataframe with Item information - id_col = Column name containing unique identifier for an item - name_col = Column name containing name of the item Expected Output - item_dict = Dictionary type output containing item_id as key and item_name as value ''' item_dict ={} for i in range(df.shape[0]): item_dict[(df.loc[i,id_col])] = df.loc[i,name_col] return item_dict def runMF(interactions, n_components=30, loss='warp', k=15, epoch=30,n_jobs = 4): ''' Function to run matrix-factorization algorithm Required Input - - interactions = dataset create by create_interaction_matrix - n_components = number of embeddings you want to create to define Item and user - loss = loss function other options are logistic, brp - epoch = number of epochs to run - n_jobs = number of cores used for execution Expected Output - Model - Trained model ''' x = csr_matrix(interactions.values) model = LightFM(no_components= n_components, loss=loss,k=k) model.fit(x,epochs=epoch,num_threads = n_jobs) return model def create_item_emdedding_distance_matrix(model,interactions): ''' Function to create item-item distance embedding matrix Required Input - - model = Trained matrix factorization model - interactions = dataset used for training the model Expected Output - - item_emdedding_distance_matrix = Pandas dataframe containing cosine distance matrix b/w items ''' df_item_norm_sparse = csr_matrix(model.item_embeddings) similarities = cosine_similarity(df_item_norm_sparse) print(similarities[0]) item_emdedding_distance_matrix = pd.DataFrame(similarities) item_emdedding_distance_matrix.columns = interactions.columns item_emdedding_distance_matrix.index = interactions.columns return item_emdedding_distance_matrix def item_item_recommendation(item_emdedding_distance_matrix, item_id, item_dict, n_items = 10, show = True): ''' Function to create item-item recommendation Required Input - - item_emdedding_distance_matrix = Pandas dataframe containing cosine distance matrix b/w items - item_id = item ID for which we need to generate recommended items - item_dict = Dictionary type input containing item_id as key and item_name as value - n_items = Number of items needed as an output Expected Output - - recommended_items = List of recommended items ''' recommended_items = list(pd.Series(item_emdedding_distance_matrix.loc[item_id]. \ sort_values(ascending = False).head(n_items+1). 
\ index[1:n_items+1])) if show == True: print("Item of interest :{0}".format(item_id)) print("Item similar to the above item:") counter = 1 for i in recommended_items: print(str(counter) + '- ' + i) counter+=1 return recommended_items # Creating interaction matrix using rating data interactions = create_interaction_matrix(df = df, user_col = 'customer_id', item_col = 'product_title', rating_col = 'star_rating') interactions.head() # Create User Dict user_dict = create_user_dict(interactions=interactions) # Create Item dict movies_dict = create_item_dict(df = df, id_col = 'product_id', name_col = 'product_title') from scipy.sparse import csr_matrix from lightfm import LightFM mf_model = runMF(interactions = interactions, n_components = 30, loss = 'warp', epoch = 30, n_jobs = 4) ## Creating item-item distance matrix from sklearn.metrics.pairwise import cosine_similarity item_item_dist = create_item_emdedding_distance_matrix(model = mf_model, interactions = interactions) import _pickle as p outfile = "/db/lightfm_item_matrix2.p" with open(outfile, 'wb') as pickle_file: p.dump(item_item_dist, pickle_file) outfile2 = "/db/lightfm_movie_dict.p" with open(outfile2, 'wb') as pickle2: p.dump(movies_dict, pickle2) ## Calling 10 recommended items for item id rec_list = item_item_recommendation(item_emdedding_distance_matrix = item_item_dist, item_id = 'The Stand', item_dict = movies_dict, n_items = 10) ###Output _____no_output_____ ###Markdown Modeling in Suprise Works best for User-Item and no metadata ###Code from surprise import Reader, Dataset # to load dataset from pandas df, we need `load_fromm_df` method in surprise lib ratings_dict = {'itemID': list(df.product_title), 'userID': list(df.customer_id), 'rating': list(df.star_rating)} df = pd.DataFrame(ratings_dict) # A reader is still needed but only the rating_scale param is required. # The Reader class is used to parse a file containing ratings. reader = Reader(rating_scale=(1, 5)) # The columns must correspond to user id, item id and ratings (in that order). data = Dataset.load_from_df(df[['userID', 'itemID', 'rating']], reader) from __future__ import (absolute_import, division, print_function, unicode_literals) from collections import defaultdict from surprise import SVD from surprise import Dataset def get_top_k(predictions, k): '''Return a top_k dicts where keys are user ids and values are lists of tuples [(item id, rating estimation) ...]. Takes in a list of predictions as returned by the test method. ''' # First map the predictions to each user. top_k = defaultdict(list) for uid, iid, true_r, est, _ in predictions: top_k[uid].append((iid, est)) # Then sort the predictions for each user and retrieve the k highest ones. for uid, user_ratings in top_k.items(): user_ratings.sort(key=lambda x:x[1], reverse=True) top_k[uid] = user_ratings[:k] return top_k trainset = data.build_full_trainset() algo = SVD() algo.fit(trainset) # We are here testing on the WHOLE dataset. Which means that all the ratings we # are predicting are already known, but it does not really matter. testset = data.construct_testset(raw_testset=data.raw_ratings) predictions = algo.test(testset) #accuracy.rmse(predictions, verbose=True) # ~ 0.68 (which is low) #print(predictions) top_k = get_top_k(predictions, 5) # Print the recommended items for uid, user_ratings in top_k.items(): print(uid, [iid for (iid, _) in user_ratings]) # Compute the total number of recommended items. 
all_recommended_items = set(iid for (_, user_ratings) in top_k.items() for (iid, _) in user_ratings) print('Number of recommended items:', len(all_recommended_items), 'over', len(top_k), 'users') ###Output _____no_output_____
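###Markdown Because the predictions above were made on the same ratings the SVD model was trained on, the scores are optimistic. A quick hold-out check (a sketch using Surprise's built-in cross-validation, assuming the `data` object built earlier) gives a more honest error estimate: ###Code
from surprise import SVD
from surprise.model_selection import cross_validate

# 5-fold cross-validation of the SVD recommender on the ratings data
cv_results = cross_validate(SVD(), data, measures=['RMSE', 'MAE'], cv=5, verbose=True)
# Average test RMSE across the folds
cv_results['test_rmse'].mean()
###Output _____no_output_____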
p1-2-structure-1.ipynb
###Markdown The Basic Structure of a Program We said earlier that programming is like teaching a child, or more concretely, like telling a child a story. Making up and telling stories follows a pattern: at the very least you cannot do without "characters" and "plot". If the characters are vivid and distinctive and the plot is full of twists and turns, the story is engaging, and in the end it delivers its theme. To write a story with the computer as the audience, we use the grammar of a programming language to spell out the characters and the plot within its rules; the story we write is the source code, and the language's compiler or interpreter then lets the computer run that source code and produce the result we want. Understanding the structure of a program means understanding what "means of expression" the programming language gives us and how we can use them to tell our story. A good programming language has strong expressive power; writing code in a language that suits you often brings a sense of pleasure, and that pleasure comes from the language's expressiveness. Among programmers, programming languages are a bit like fashion brands: some people like this brand, some like that one, and small talk between programmers usually drifts toward "which language is best", a question that rarely gets settled. Personally I think a good programming language should have a few qualities: * Strong expressiveness: it can express all kinds of logic with concise syntax; * Support for abstraction: it helps the programmer abstract problems better; * Easy to understand and learn: its syntax is natural and fluent and easy for people to follow; * Distinctiveness: it has unique strengths in at least one particular domain, rather than covering everything but excelling at nothing. As your study deepens you will form your own impressions, and then we can discuss what the best programming language is in our eyes. For our current stage, Python is a very good teaching language that basically satisfies the qualities listed above; you can judge for yourself as you learn. Below we use Python as the example to understand the general structure of a program, that is, the ways a program expresses things. The Basic Structure of a Program (1): Values and Variables If the source code is the story we write, then "values and variables" are its characters, the "subjects" and "objects" of the source code. The characters that appear in the world of programs are simply data of various kinds. The most basic kinds of data include: * logical true and false, formally called boolean values (*boolean*); * numbers of all kinds, including integers and decimals (called floating-point numbers, *float*, inside the computer); * characters and strings (*string*) that represent text. Besides these most basic kinds, there are a few more "advanced" ones: * Objects: data types we can define freely; an object can have all sorts of attributes (*attribute*) and methods (*method*); objects are introduced later in this part; * Data containers: data that can hold other data, such as a list of numbers; we will study data containers in Part 4; * Functions: a function is a piece of source code that does a specific job; functions are part of the program itself, but functions are also data; we will study functions in Part 4. Data exists in a program in two forms, as a "value (*value*)" and as a "variable (*variable*)". Here are some examples of values:```python
42
3.14
'a'
'abracadabra'
True
False
```Values have types. Python provides a function called `type()` that tells us the type of a given value; let's try it: ###Code type(42) type(3.14) type('abracadabra') type(False) ###Output _____no_output_____
###Markdown The results above tell us: * `42` has type `int`, short for *integer*, a **whole number**; * `3.14` has type `float`, a **floating-point number**, which we can simply think of as a decimal; * `'abracadabra'` has type `str`, short for *string*; * `False` has type `bool`, short for *boolean*; boolean values stand for logical truth and falsehood in the computer, and there are only two of them, `True` and `False`. In general, data of the same type can take part in certain "operations" with one another: numbers can be added, subtracted, multiplied and divided, strings can be concatenated, and booleans can be combined with logical operations, while data of different types are like different species and cannot interact. So knowing the type of your data matters a great deal. With values alone we can do calculations, but we still cannot write a real program. Why not? Because a program is written in the hope that many people, called **users** (*user*), will use it again and again; that is what makes it worthwhile. On each use the user supplies some input, the program processes it and returns the result the user wants. The inputs are usually of the same type each time, but their concrete values are unknown when we write the program, so we need something that can stand in for values inside the program and only take on concrete values from the user's input when the program runs. That something is the **variable**. Variables let us write one program that handles a whole class of values, whatever the concrete values turn out to be. When we learn algebra, *a + b = b + a* expresses the highly abstract, universal "commutative law of addition" without caring what *a* and *b* actually are; variables in a program work the same way and let us perform "data abstraction". This idea is important and we will keep deepening it later. Every programming language supports values and variables, and supports giving a particular value to a variable; that operation is called **assignment** (*assignment*). Here are some examples: ###Code a = 12 b = 30 f = 3.14 s = 'abracadabra' l = [1, 2, 3] t = True f = False ###Output _____no_output_____
###Markdown After assignment a variable holds the corresponding value, **and it also takes on the type of that value**, so a variable has a type as soon as it has been assigned: ###Code type(a) type(s) type(t) type(l) ###Output _____no_output_____
###Markdown The last one, `l`, is something new called a **list** (*list*), a kind of data container that we will introduce in detail later. If we want to get rid of a variable and stop using it, we can use the `del` command: ###Code del b ###Output _____no_output_____
###Markdown After this the variable `b` is back in an unassigned state: it has neither a value nor a type, and in practice it cannot be used; using it anywhere will raise an error. Try it if you do not believe it. Python also supports "multiple assignment", which assigns to several variables at once, like this: ###Code a, b = 12, 30 ###Output _____no_output_____
###Markdown Afterwards the value of `a` is `12` and the value of `b` is `30`. The right-hand side of an assignment can be a variable as well as a value. When a variable appears on the right-hand side, think of it as "a name for a value": the Python interpreter replaces it with its value (a process called **evaluation**), for example: ###Code g = f ###Output _____no_output_____
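###Markdown Multiple assignment has a handy everyday use: swapping the values of two variables in one line, without a temporary variable. A small example building on the variables above: ###Code
a, b = 12, 30
# Swap the two values with a single multiple assignment:
# the right-hand side (b, a) is evaluated first, then assigned
a, b = b, a
print(a, b)
###Output _____no_output_____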
colabs/summer_schools/intro_to_graph_nets_tutorial_with_jraph.ipynb
###Markdown Introduction to Graph Neural Nets with JAX/jraph*Lisa Wang, DeepMind ([email protected]), Nikola Jovanović, ETH Zurich ([email protected])***Colab Runtime:**If possible, please use a GPU hardware accelerator to run this colab. You can choose that under *Runtime > Change Runtime Type*.**Prerequisites:*** Some familiarity with [JAX](https://github.com/google/jax), you can refer to this [colab](https://colab.sandbox.google.com/github/google/jax/blob/master/docs/jax-101/01-jax-basics.ipynb) for an introduction to JAX.* Neural network basics* Graph theory basics (MIT Open Courseware [slides](https://ocw.mit.edu/courses/civil-and-environmental-engineering/1-022-introduction-to-network-models-fall-2018/lecture-notes/MIT1_022F18_lec2.pdf) by Amir Ajorlou)We recommend watching the [Theoretical Foundations of Graph Neural Networks Lecture](https://www.youtube.com/watch?v=uF53xsT7mjc&) by Petar Veličković before working through this colab. The talk provides a theoretical introduction to Graph Neural Networks (GNNs), historical context and motivating examples.**Outline:*** [Fundamental Graph Concepts](scrollTo=gsKA-syx_LUi)* [Graph Prediction Tasks](scrollTo=spQGRxhPN8Eo)* [Intro to the jraph Library](scrollTo=3C5YI9M0vwvb)* [Graph Convolutional Network (GCN) Layer](scrollTo=NZRMF2d-h2pd)* [Build GCN Model with Multiple Layers](scrollTo=lha8rbQ78l3S)* [Node Classification with GCN on Karate Club Dataset](scrollTo=Z5t7kw7SE_h4)* [Graph Attention (GAT) Layer](scrollTo=yg8g96NdBCK6)* [Train GAT Model on Karate Club Dataset](scrollTo=anfVGJwBe27v)* [Graph Classification on MUTAG (Molecules)](scrollTo=n5TxaTGzBkBa)* [Link Prediction on CORA (Citation Network)](scrollTo=OwVE88dTRC6V)* [Bonus: Intro to Graph Adversarial Attacks](scrollTo=35kbP8GZRFEm)**Additional Resources:*** Battaglia et al. (2018): [Relational inductive biases, deep learning, and graph networks](https://arxiv.org/pdf/1806.01261)---Some sections in this colab build on the [GraphNets Tutorial colab in pytorch](https://github.com/eemlcommunity/PracticalSessions2021/blob/main/graphnets/graphnets_tutorial.ipynb) by Nikola Jovanović.We would like to thank Razvan Pascanu and Petar Veličković for their valuable input and feedback.---*Copyright 2022 by the Authors.**Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0**Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License.* Setup: Install and Import libraries ###Code !pip install git+git://github.com/deepmind/jraph.git !pip install flax !pip install dm-haiku # Imports %matplotlib inline import functools import matplotlib.pyplot as plt import jax import jax.numpy as jnp import jax.tree_util as tree import jraph import flax import haiku as hk import optax import pickle import numpy as onp import networkx as nx from typing import Any, Callable, Dict, List, Optional, Tuple ###Output _____no_output_____ ###Markdown Fundamental Graph ConceptsA graph consists of a set of nodes and a set of edges, where edges form connections between nodes.More formally, a graph is defined as $ \mathcal{G} = (\mathcal{V}, \mathcal{E})$ where $\mathcal{V}$ is the set of vertices / nodes, and $\mathcal{E}$ is the set of edges.In an **undirected** graph, each edge is an unordered pair of two nodes $ \in \mathcal{V}$. E.g. a friend network can be represented as an undirected graph, assuming that the relationship "*A is friends with B*" implies "*B is friends with A*".In a **directed** graph, each edge is an ordered pair of nodes $ \in \mathcal{V}$. E.g. a citation network would be best represented with a directed graph, since the relationship "*A cites B*" does not imply "*B cites A*".The **degree** of a node is defined as the number of edges incident on it, i.e. the sum of incoming and outgoing edges for that node.The **in-degree** is the sum of incoming edges only, and the **out-degree** is the sum of outgoing edges only.There are several ways to represent $\mathcal{E}$:1. As a **list of edges**: a list of pairs $(u,v)$, where $(u,v)$ means that there is an edge going from node $u$ to node $v$.2. As an **adjacency matrix**: a binary square matrix $A$ of size $|\mathcal{V}| \times |\mathcal{V}|$, where $A_{u,v}=1$ iff there is a connection between nodes $u$ and $v$.3. As an **adjacency list**: An array of $|\mathcal{V}|$ unordered lists, where the $i$th list corresponds to the $i$th node, and contains all the nodes directly connected to node $i$. Example: Below is a directed graph with four nodes and five edges.The arrows on the edges indicate the direction of each edge, e.g. there is an edge going from node 0 to node 1. Between node 0 and node 3, there are two edges: one going from node 0 to node 3 and one from node 3 to node 0.Node 0 has out-degree of 2, since it has two outgoing edges, and an in-degree of 2, since it has two incoming edges.The list of edges is:$$[(0, 1), (0, 3), (1, 2), (2, 0), (3, 0)]$$As adjacency matrix:$$\begin{array}{l|llll} source \setminus dest & n_0 & n_1 & n_2 & n_3 \\ \hlinen_0 & 0 & 1 & 0 & 1 \\n_1 & 0 & 0 & 1 & 0 \\n_2 & 1 & 0 & 0 & 0 \\n_3 & 1 & 0 & 0 & 0\end{array}$$As adjacency list:$$[\{1, 3\}, \{2\}, \{0\}, \{0\}]$$ Graph Prediction TasksWhat are the kinds of problems we want to solve on graphs?The tasks fall into roughly three categories:1. **Node Classification**: E.g. what is the topic of a paper given a citation network of papers?2. **Link Prediction / Edge Classification**: E.g. are two people in a social network friends?3. **Graph Classification**: E.g. is this protein molecule (represented as a graph) likely going to be effective?*The three main graph learning tasks. Image source: Petar Veličković.*Which examples of graph prediction tasks come to your mind? Which task types do they correspond to?We will create and train models on all three task types in this tutorial. 
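Before moving on to jraph, here is a minimal sketch in plain NumPy that ties together the three edge representations from the Fundamental Graph Concepts section, using the same four-node example graph:

```python
import numpy as onp  # numpy is already imported as onp in the setup cell above

# Edge list of the directed four-node example graph from above.
edge_list = [(0, 1), (0, 3), (1, 2), (2, 0), (3, 0)]
num_nodes = 4

# Adjacency matrix: A[u, v] = 1 iff there is an edge u -> v.
adj_matrix = onp.zeros((num_nodes, num_nodes), dtype=onp.int32)
for u, v in edge_list:
    adj_matrix[u, v] = 1

# Adjacency list: for each node, the set of nodes it has an outgoing edge to.
adj_list = [set() for _ in range(num_nodes)]
for u, v in edge_list:
    adj_list[u].add(v)

print(adj_matrix)              # matches the matrix written out above
print(adj_list)                # [{1, 3}, {2}, {0}, {0}]
print(adj_matrix.sum(axis=1))  # out-degrees: [2 1 1 1]
print(adj_matrix.sum(axis=0))  # in-degrees:  [2 1 1 1]
```

All three representations carry the same information; which one is most convenient depends on the operation. The `GraphsTuple` introduced next is essentially an edge-list representation, with the two ends of each edge stored in aligned `senders` and `receivers` arrays.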
Intro to the jraph LibraryIn the following sections, we will learn how to represent graphs and build GNNs in Python. We will use[jraph](https://github.com/deepmind/jraph), a lightweight library for working with GNNs in [JAX](https://github.com/google/jax). Representing a graph in jraphIn jraph, a graph is represented with a `GraphsTuple` object. In addition to defining the graph structure of nodes and edges, you can also store node features, edge features and global graph features in a `GraphsTuple`.In the `GraphsTuple`, edges are represented in two aligned arrays of node indices: senders (source nodes) and receivers (destinaton nodes).Each index corresponds to one edge, e.g. edge `i` goes from `senders[i]` to `receivers[i]`.You can even store multiple graphs in one `GraphsTuple` object.We will start with creating a simple directed graph with 4 nodes and 5 edges. We will also add toy features to the nodes, using `2*node_index` as the feature.We will later use this toy graph in the GCN demo. ###Code def build_toy_graph() -> jraph.GraphsTuple: """Define a four node graph, each node has a scalar as its feature.""" # Nodes are defined implicitly by their features. # We will add four nodes, each with a feature, e.g. # node 0 has feature [0.], # node 1 has featre [2.] etc. # len(node_features) is the number of nodes. node_features = jnp.array([[0.], [2.], [4.], [6.]]) # We will now specify 5 directed edges connecting the nodes we defined above. # We define this with `senders` (source node indices) and `receivers` # (destination node indices). # For example, to add an edge from node 0 to node 1, we append 0 to senders, # and 1 to receivers. # We can do the same for all 5 edges: # 0 -> 1 # 1 -> 2 # 2 -> 0 # 3 -> 0 # 0 -> 3 senders = jnp.array([0, 1, 2, 3, 0]) receivers = jnp.array([1, 2, 0, 0, 3]) # You can optionally add edge attributes to the 5 edges. edges = jnp.array([[5.], [6.], [7.], [8.], [8.]]) # We then save the number of nodes and the number of edges. # This information is used to make running GNNs over multiple graphs # in a GraphsTuple possible. n_node = jnp.array([4]) n_edge = jnp.array([5]) # Optionally you can add `global` information, such as a graph label. global_context = jnp.array([[1]]) # Same feature dims as nodes and edges. graph = jraph.GraphsTuple( nodes=node_features, edges=edges, senders=senders, receivers=receivers, n_node=n_node, n_edge=n_edge, globals=global_context ) return graph graph = build_toy_graph() ###Output _____no_output_____ ###Markdown Inspecting the GraphsTuple ###Code # Number of nodes # Note that `n_node` returns an array. The length of `n_node` corresponds to # the number of graphs stored in one `GraphsTuple`. # In this case, we only have one graph, so n_node has length 1. graph.n_node # Number of edges graph.n_edge # Node features graph.nodes # Edge features graph.edges # Edges graph.senders graph.receivers # Graph-level features graph.globals ###Output _____no_output_____ ###Markdown Visualizing the GraphTo visualize the graph structure of the graph we created above, we will use the [`networkx`](networkx.org) library because it already has functions for drawing graphs.We first convert the `jraph.GraphsTuple` to a `networkx.DiGraph`. 
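As a quick sanity check before plotting, you can also read the edge list directly off the `GraphsTuple` (a minimal sketch using the toy `graph` built above):

```python
# Each aligned pair (senders[i], receivers[i]) is one directed edge.
for s, r in zip(graph.senders.tolist(), graph.receivers.tolist()):
    print(f'{s} -> {r}')  # 0 -> 1, 1 -> 2, 2 -> 0, 3 -> 0, 0 -> 3
```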
###Code def convert_jraph_to_networkx_graph(jraph_graph: jraph.GraphsTuple) -> nx.Graph: nodes, edges, receivers, senders, _, _, _ = jraph_graph nx_graph = nx.DiGraph() if nodes is None: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n) else: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n, node_feature=nodes[n]) if edges is None: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge(int(senders[e]), int(receivers[e])) else: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge( int(senders[e]), int(receivers[e]), edge_feature=edges[e]) return nx_graph def draw_jraph_graph_structure(jraph_graph: jraph.GraphsTuple) -> None: nx_graph = convert_jraph_to_networkx_graph(jraph_graph) pos = nx.spring_layout(nx_graph) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, font_color='yellow') draw_jraph_graph_structure(graph) ###Output _____no_output_____ ###Markdown Graph Convolutional Network (GCN) LayerNow let's implement our first graph network!The graph convolutional network, introduced by by Kipf et al. (2017) in https://arxiv.org/abs/1609.02907, is one of the basic graph network architectures. We will build its core building block, the graph convolutional layer.In a convolutional neural network (CNN), a convolutional filter (e.g. 3x3) is applied repeatedly to different parts of a larger input (e.g. 64x64) by striding across the input.In a GCN, a convolution filter is applied to the neighbourhoods around a node in a graph.However, there are also some differences to point out:In contrast to the CNN filter, the neighbourhoods in a GCN can be of different sizes, and there is no ordering of inputs. To see that, note that the CNN filter performs a weighted sum aggregation over the inputs with learnable weights, where each filter input has its own weight. In the GCN, the same weight is applied to all neighbours and the aggregation function is not learned. In other words, in a GCN, each neighbor contributes equally. This is why the CNN filter is not order-invariant, but the GCN filter is.Comparison of CNN and GCN filters.Image source: https://arxiv.org/pdf/1901.00596.pdfMore specifically, the GCN layer performs two steps:1. _Compute messages / update node features_: Create a feature vector $\vec{h}_n$ for each node $n$ (e.g. with an MLP). This is going to be the message that this node will pass to neighboring nodes.2. _Message-passing / aggregate node features_: For each node, calculate a new feature vector $\vec{h}'_n$ based on the messages (features) from the nodes in its neighborhood. In a directed graph, only nodes from incoming edges are counted as neighbors. The image below shows this aggregation step. There are multiple options for aggregation in a GCN, e.g. taking the mean, the sum, the min or max. (Later in this tutorial, we will also see how we can make the aggregation function dependent on the node features by adding an attention mechanism in the Graph Attention Network.)*\"A generic overview of a graph convolution operation, highlighting the relevant information for deriving the next-level features for every node in the graph.\"* Image source: Petar Veličković (https://github.com/PetarV-/TikZ) Simple GCN Layer ###Code def apply_simplified_gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: # Unpack GraphsTuple nodes, _, receivers, senders, _, _, _ = graph # 1. Update node features # For simplicity, we will first use an identify function here, and replace it # with a trainable MLP block later. 
update_node_fn = lambda nodes: nodes nodes = update_node_fn(nodes) # 2. Aggregate node features over nodes in neighborhood # Equivalent to jnp.sum(n_node), but jittable total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] aggregate_nodes_fn = jax.ops.segment_sum # Compute new node features by aggregating messages from neighboring nodes nodes = tree.tree_map(lambda x: aggregate_nodes_fn(x[senders], receivers, total_num_nodes), nodes) out_graph = graph._replace(nodes=nodes) return out_graph ###Output _____no_output_____ ###Markdown We can now run the graph convolution on our toy graph from before. ###Code graph = build_toy_graph() ###Output _____no_output_____ ###Markdown Here is the visualized graph. ###Code draw_jraph_graph_structure(graph) out_graph = apply_simplified_gcn(graph) ###Output _____no_output_____ ###Markdown Since we used the identity function for updating nodes and sum aggregation, we can verify the results pretty easily. As a reminder, in this toy graph, the node features are the same as the node index.Node 0: sum of features from node 2 and node 3 $\rightarrow$ 10.Node 1: sum of features from node 0 $\rightarrow$ 0.Node 2: sum of features from node 1 $\rightarrow$ 2.Node 3: sum of features from node 0 $\rightarrow$ 0. ###Code out_graph.nodes ###Output _____no_output_____ ###Markdown Add Trainable Parameters to GCN layerSo far our graph convolution operation doesn't have any learnable parameters.Let's add an MLP block to the update function to make it trainable. ###Code class MLP(hk.Module): def __init__(self, features: jnp.ndarray): super().__init__() self.features = features def __call__(self, x: jnp.ndarray) -> jnp.ndarray: layers = [] for feat in self.features[:-1]: layers.append(hk.Linear(feat)) layers.append(jax.nn.relu) layers.append(hk.Linear(self.features[-1])) mlp = hk.Sequential(layers) return mlp(x) # Use MLP block to define the update node function update_node_fn = lambda x: MLP(features=[8, 4])(x) ###Output _____no_output_____ ###Markdown Check outputs of `update_node_fn` with MLP Block ###Code graph = build_toy_graph() update_node_module = hk.without_apply_rng(hk.transform(update_node_fn)) params = update_node_module.init(jax.random.PRNGKey(42), graph.nodes) out = update_node_module.apply(params, graph.nodes) ###Output _____no_output_____ ###Markdown As output, we expect the updated node features. We should see one array of dim 4 for each of the 4 nodes, which is the result of applying a single MLP block to the features of each node individually. ###Code out ###Output _____no_output_____ ###Markdown Add Self-Edges (Edges connecting a node to itself)For each node, add an edge of the node onto itself. This way, nodes will include themselves in the aggregation step. ###Code def add_self_edges_fn(receivers: jnp.ndarray, senders: jnp.ndarray, total_num_nodes: int) -> Tuple[jnp.ndarray, jnp.ndarray]: """Adds self edges. Assumes self edges are not in the graph yet.""" receivers = jnp.concatenate((receivers, jnp.arange(total_num_nodes)), axis=0) senders = jnp.concatenate((senders, jnp.arange(total_num_nodes)), axis=0) return receivers, senders ###Output _____no_output_____ ###Markdown Add Symmetric NormalizationNote that the nodes may have different numbers of neighbors / degrees.This could lead to instabilities during neural network training, e.g. exploding or vanishing gradients. To address that, normalization is a commonly used method. 
In this case, we will normalize by node degrees.As a first attempt, we could count the number of incoming edges (including self-edge) and divide by that value.More formally, let $A$ be the adjacency matrix defining the edges of the graph.Then we define the degree matrix $D$ as a diagonal matrix with $D_{ii} = \sum_jA_{ij}$ (the degree of node $i$)Now we can normalize $AH$ by dividing it by the node degrees:$${D}^{-1}AH$$To take both the in and out degrees into account, we can use symmetric normalization, which is also what Kipf and Welling proposed in their [paper](https://arxiv.org/abs/1609.02907):$$D^{-\frac{1}{2}}AD^{-\frac{1}{2}}H$$ General GCN LayerNow we can write a more general and configurable version of the Graph Convolution layer, allowing the caller to specify:* **`update_node_fn`**: Function to use to update node features (e.g. the MLP block version we just implemented)* **`aggregate_nodes_fn`**: Aggregation function to use to aggregate messages from neighbourhood.* **`add_self_edges`**: Whether to add self edges for aggregation step.* **`symmetric_normalization`**: Whether to add symmetric normalization. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L506 def GraphConvolution(update_node_fn: Callable, aggregate_nodes_fn: Callable = jax.ops.segment_sum, add_self_edges: bool = False, symmetric_normalization: bool = True) -> Callable: """Returns a method that applies a Graph Convolution layer. Graph Convolutional layer as in https://arxiv.org/abs/1609.02907, NOTE: This implementation does not add an activation after aggregation. If you are stacking layers, you may want to add an activation between each layer. Args: update_node_fn: function used to update the nodes. In the paper a single layer MLP is used. aggregate_nodes_fn: function used to aggregates the sender nodes. add_self_edges: whether to add self edges to nodes in the graph as in the paper definition of GCN. Defaults to False. symmetric_normalization: whether to use symmetric normalization. Defaults to True. Returns: A method that applies a Graph Convolution layer. """ def _ApplyGCN(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Applies a Graph Convolution layer.""" nodes, _, receivers, senders, _, _, _ = graph # First pass nodes through the node updater. nodes = update_node_fn(nodes) # Equivalent to jnp.sum(n_node), but jittable total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. # In principle, a `GraphsTuple` should partition by n_edge, but in # this case it is not required since a GCN is agnostic to whether # the `GraphsTuple` is a batch of graphs or a single large graph. conv_receivers, conv_senders = add_self_edges_fn(receivers, senders, total_num_nodes) else: conv_senders = senders conv_receivers = receivers # pylint: disable=g-long-lambda if symmetric_normalization: # Calculate the normalization values. count_edges = lambda x: jax.ops.segment_sum( jnp.ones_like(conv_senders), x, total_num_nodes) sender_degree = count_edges(conv_senders) receiver_degree = count_edges(conv_receivers) # Pre normalize by sqrt sender degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: x * jax.lax.rsqrt(jnp.maximum(sender_degree, 1.0))[:, None], nodes, ) # Aggregate the pre-normalized nodes. 
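      # (aggregate_nodes_fn is jax.ops.segment_sum by default: it sums the
      # features of every sender node that shares the same receiver index.
      # A tiny standalone illustration, not part of this layer:
      #   jax.ops.segment_sum(jnp.array([[1.], [2.]]), jnp.array([2, 2]), 3)
      #   returns [[0.], [0.], [3.]].)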
nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # Post normalize by sqrt receiver degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: (x * jax.lax.rsqrt(jnp.maximum(receiver_degree, 1.0))[:, None]), nodes, ) else: nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # pylint: enable=g-long-lambda return graph._replace(nodes=nodes) return _ApplyGCN ###Output _____no_output_____ ###Markdown Test General GCN Layer ###Code gcn_layer = GraphConvolution( update_node_fn=lambda n: MLP(features=[8, 4])(n), aggregate_nodes_fn=jax.ops.segment_sum, add_self_edges=True, symmetric_normalization=True ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Build GCN Model with Multiple LayersWith a single GCN layer, a node's representation after the GCN layer is onlyinfluenced by its direct neighbourhood. However, we may want to consider larger neighbourhoods, i.e. more than just 1 hop away. To achieve that, we can stackmultiple GCN layers, similar to how stacking CNN layers expands the input region.We will define a network with three GCN layers: ###Code def gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a graph neural network with 3 GCN layers. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(4)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) graph = gn(graph) return graph graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Node Classification with GCN on Karate Club DatasetTime to try out our GCN on our first graph prediction task! Zachary's Karate Club Dataset[Zachary's karate club](https://en.wikipedia.org/wiki/Zachary%27s_karate_club) is a small dataset commonly used as an example for a social graph. It's great for demo purposes, as it's easy to visualize and quick to train a model on it.A node represents a student or instructor in the club. An edge means that those two people have interacted outside of the class. There are two instructors in the club.Each student is assigned to one of two instructors. Optimizing the GCN on the Karate Club Node Classification TaskThe task is to predict the assignment of students to instructors, given the social graph and only knowing the assignment of two nodes (the two instructors) a priori.In other words, out of the 34 nodes, only two nodes are labeled, and we are trying to optimize the assignment of the other 32 nodes, by **maximizing the log-likelihood of the two known node assignments**.We will compute the accuracy of our node assignments by comparing to the ground-truth assignments. **Note that the ground-truth for the 32 student nodes is not used in the loss function itself.** Let's load the dataset: ###Code """Zachary's karate club example. From https://github.com/deepmind/jraph/blob/master/jraph/examples/zacharys_karate_club.py. 
Here we train a graph neural network to process Zachary's karate club. https://en.wikipedia.org/wiki/Zachary%27s_karate_club Zachary's karate club is used in the literature as an example of a social graph. Here we use a graphnet to optimize the assignments of the students in the karate club to two distinct karate instructors (Mr. Hi and John A). """ def get_zacharys_karate_club() -> jraph.GraphsTuple: """Returns GraphsTuple representing Zachary's karate club.""" social_graph = [ (1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2), (4, 0), (5, 0), (6, 0), (6, 4), (6, 5), (7, 0), (7, 1), (7, 2), (7, 3), (8, 0), (8, 2), (9, 2), (10, 0), (10, 4), (10, 5), (11, 0), (12, 0), (12, 3), (13, 0), (13, 1), (13, 2), (13, 3), (16, 5), (16, 6), (17, 0), (17, 1), (19, 0), (19, 1), (21, 0), (21, 1), (25, 23), (25, 24), (27, 2), (27, 23), (27, 24), (28, 2), (29, 23), (29, 26), (30, 1), (30, 8), (31, 0), (31, 24), (31, 25), (31, 28), (32, 2), (32, 8), (32, 14), (32, 15), (32, 18), (32, 20), (32, 22), (32, 23), (32, 29), (32, 30), (32, 31), (33, 8), (33, 9), (33, 13), (33, 14), (33, 15), (33, 18), (33, 19), (33, 20), (33, 22), (33, 23), (33, 26), (33, 27), (33, 28), (33, 29), (33, 30), (33, 31), (33, 32)] # Add reverse edges. social_graph += [(edge[1], edge[0]) for edge in social_graph] n_club_members = 34 return jraph.GraphsTuple( n_node=jnp.asarray([n_club_members]), n_edge=jnp.asarray([len(social_graph)]), # One-hot encoding for nodes, i.e. argmax(nodes) = node index. nodes=jnp.eye(n_club_members), # No edge features. edges=None, globals=None, senders=jnp.asarray([edge[0] for edge in social_graph]), receivers=jnp.asarray([edge[1] for edge in social_graph])) def get_ground_truth_assignments_for_zacharys_karate_club() -> jnp.ndarray: """Returns ground truth assignments for Zachary's karate club.""" return jnp.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) graph = get_zacharys_karate_club() print(f'Number of nodes: {graph.n_node[0]}') print(f'Number of edges: {graph.n_edge[0]}') ###Output _____no_output_____ ###Markdown Visualize the karate club graph with circular node layout: ###Code nx_graph = convert_jraph_to_networkx_graph(graph) pos = nx.circular_layout(nx_graph) plt.figure(figsize=(6, 6)) nx.draw(nx_graph, pos=pos, with_labels = True, node_size=500, font_color='yellow') ###Output _____no_output_____ ###Markdown Define the GCN with the `GraphConvolution` layers we implemented: ###Code def gcn_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GCN for the karate club task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) # output dim is 2 because we have 2 output classes. 
graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Training and evaluation code: ###Code def optimize_club(network: hk.Transformed, num_steps: int) -> jnp.ndarray: """Solves the karate club problem by optimizing the assignments of students.""" zacharys_karate_club = get_zacharys_karate_club() labels = get_ground_truth_assignments_for_zacharys_karate_club() params = network.init(jax.random.PRNGKey(42), zacharys_karate_club) @jax.jit def predict(params: hk.Params) -> jnp.ndarray: decoded_graph = network.apply(params, zacharys_karate_club) return jnp.argmax(decoded_graph.nodes, axis=1) @jax.jit def prediction_loss(params: hk.Params) -> jnp.ndarray: decoded_graph = network.apply(params, zacharys_karate_club) # We interpret the decoded nodes as a pair of logits for each node. log_prob = jax.nn.log_softmax(decoded_graph.nodes) # The only two assignments we know a-priori are those of Mr. Hi (Node 0) # and John A (Node 33). return -(log_prob[0, 0] + log_prob[33, 1]) opt_init, opt_update = optax.adam(1e-2) opt_state = opt_init(params) @jax.jit def update(params: hk.Params, opt_state) -> Tuple[hk.Params, Any]: """Returns updated params and state.""" g = jax.grad(prediction_loss)(params) updates, opt_state = opt_update(g, opt_state) return optax.apply_updates(params, updates), opt_state @jax.jit def accuracy(params: hk.Params) -> jnp.ndarray: decoded_graph = network.apply(params, zacharys_karate_club) return jnp.mean(jnp.argmax(decoded_graph.nodes, axis=1) == labels) for step in range(num_steps): print(f"step {step} accuracy {accuracy(params).item():.2f}") params, opt_state = update(params, opt_state) return predict(params) ###Output _____no_output_____ ###Markdown Let's train the GCN! We expect this model reach an accuracy of about 0.91. ###Code network = hk.without_apply_rng(hk.transform(gcn_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown Try modifying the model parameters to see if you can improve the accuracy!You can also modify the dataset itself, and see how that influences model training. Node assignments predicted by the model at the end of training: ###Code result ###Output _____no_output_____ ###Markdown Visualize ground truth and predicted node assignments:What do you think of the results? ###Code zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GCN') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Attention (GAT) LayerWhile the GCN we covered in the previous section can learn meaningful representations, it also has some shortcomings. Can you think of any?In the GCN layer, the messages from all its neighbours and the node itself are equally weighted. This may lead to loss of node-specific information. E.g., consider the case when a set of nodes shares the same set of neighbors, and start out with different node features. 
Then because of averaging, their resulting output features would be the same. Adding self-edges mitigates this issue by a small amount, but this problem is magnified with increasing number of GCN layers and number of edges connecting to a node.The graph attention (GAT) mechanism, as proposed by [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903), allows the network to learn how to weigh / assign importance to the node features from the neighbourhood when computing the new node features. This is very similar to the idea of using attention in Transformers, which were introduced in [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762).(One could even argue that Transformers are graph attention networks operating on the special case of fully-connected graphs.)In the figure below, $\vec{h}$ are the node features and $\vec{\alpha}$ are the learned attention weights.Figure Credit: [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903).(Detail: This image is showing multi-headed attention with 3 heads, each color corresponding to a different head. At the end, an aggregation function is applied over all the heads.)To obtain the output node features of the GAT layer, we compute:$$ \vec{h}'_i = \sum _{j \in \mathcal{N}(i)}\alpha_{ij} \mathbf{W} \vec{h}_j$$Here, $\mathbf{W}$ is a weight matrix which performs a linear transformation on the input. How do we obtain $\alpha$, or in other words, learn what to pay attention to?Intuitively, the attention coefficient $\alpha_{ij}$ should rely on both the transformed features from nodes $i$ and $j$. So let's first define an attention mechanism function $\mathrm{attention\_fn}$ that computes the intermediary attention coefficients $e_{ij}$:$$ e_{ij} = \mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j)$$To obtain normalized attention weights $\alpha$, we apply a softmax:$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum _{j \in \mathcal{N}(i)}\exp(e_{ij})}$$For the function $a$, the authors of the GAT paper chose to concatenate the transformed node features (denoted by $||$) and apply a single-layer feedforward network, parameterized by a weight vector $\vec{\mathbf{a}}$ and with LeakyRelu as non-linearity.In the implementation below, we refer to $\mathbf{W}$ as `attention_query_fn` and $att\_fn$ as `attention_logit_fn`.$$\mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j) = \text{LeakyReLU}(\vec{\mathbf{a}}(\mathbf{W}\vec{h}_i || \mathbf{W}\vec{h}_j))$$The figure below summarizes this attention mechanism visually.Figure Credit: Petar Velickovic.<!-- $\sum_{j \in \mathcal{N}(i)}\vec{\alpha}_{ij} \stackrel{!}{=}1 $ --> ###Code # GAT implementation adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L442. def GAT(attention_query_fn: Callable, attention_logit_fn: Callable, node_update_fn: Optional[Callable] = None, add_self_edges: bool = True) -> Callable: """Returns a method that applies a Graph Attention Network layer. Graph Attention message passing as described in https://arxiv.org/pdf/1710.10903.pdf. This model expects node features as a jnp.array, may use edge features for computing attention weights, and ignore global features. It does not support nests. Args: attention_query_fn: function that generates attention queries from sender node features. attention_logit_fn: function that converts attention queries into logits for softmax attention. node_update_fn: function that updates the aggregated messages. If None, will apply leaky relu and concatenate (if using multi-head attention). 
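    add_self_edges: whether to add self edges (an edge from each node to
      itself) to the senders and receivers, so that every node also includes
      its own features in the aggregation. Defaults to True.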
Returns: A function that applies a Graph Attention layer. """ # pylint: disable=g-long-lambda if node_update_fn is None: # By default, apply the leaky relu and then concatenate the heads on the # feature axis. node_update_fn = lambda x: jnp.reshape( jax.nn.leaky_relu(x), (x.shape[0], -1)) def _ApplyGAT(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Applies a Graph Attention layer.""" nodes, edges, receivers, senders, _, _, _ = graph # Equivalent to the sum of n_node, but statically known. try: sum_n_node = nodes.shape[0] except IndexError: raise IndexError('GAT requires node features') # Pass nodes through the attention query function to transform # node features, e.g. with an MLP. nodes = attention_query_fn(nodes) total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. receivers, senders = add_self_edges_fn(receivers, senders, total_num_nodes) # We compute the softmax logits using a function that takes the # embedded sender and receiver attributes. sent_attributes = nodes[senders] received_attributes = nodes[receivers] att_softmax_logits = attention_logit_fn(sent_attributes, received_attributes, edges) # Compute the attention softmax weights on the entire tree. att_weights = jraph.segment_softmax( att_softmax_logits, segment_ids=receivers, num_segments=sum_n_node) # Apply attention weights. messages = sent_attributes * att_weights # Aggregate messages to nodes. nodes = jax.ops.segment_sum(messages, receivers, num_segments=sum_n_node) # Apply an update function to the aggregated messages. nodes = node_update_fn(nodes) return graph._replace(nodes=nodes) # pylint: enable=g-long-lambda return _ApplyGAT ###Output _____no_output_____ ###Markdown Test GAT Layer ###Code def attention_logit_fn(sender_attr: jnp.ndarray, receiver_attr: jnp.ndarray, edges: jnp.ndarray) -> jnp.ndarray: del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gat_layer = GAT( attention_query_fn=lambda n: hk.Linear(8) (n), # Applies W to the node features attention_logit_fn=attention_logit_fn, node_update_fn=None, add_self_edges=True, ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gat_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Train GAT Model on Karate Club DatasetWe will now repeat the karate club experiment with a GAT network. ###Code def gat_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GAT network for the karate club node classification task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ def _attention_logit_fn(sender_attr: jnp.ndarray, receiver_attr: jnp.ndarray, edges: jnp.ndarray) -> jnp.ndarray: del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=None, add_self_edges=True) graph = gn(graph) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=hk.Linear(2), add_self_edges=True) graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Let's train the model!We expect the model to reach an accuracy of about 0.97. 
###Code network = hk.without_apply_rng(hk.transform(gat_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown The final node assignment predicted by the trained model: ###Code result zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GAT') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Classification on MUTAG (Molecules) In the previous section, we used our GCN and GAT networks on a node classification problem. Now, let's use the same model architectures on a **graph classification task**. The main difference from our previous setup is that instead of observing individual node latents, we are now attempting to summarize them into one embedding vector, representative of the entire graph, which we then use to predict the class of this graph.We will do this on one of the most common tasks of this type -- **molecular property prediction**, where molecules are represented as graphs. Nodes correspond to atoms, and edges represent the bonds between them. We will use the **MUTAG** dataset for this example, a common dataset from the [TUDatasets](https://chrsmrrs.github.io/datasets/) collection.We have converted this dataset to be compatible with jraph and will download it in the cell below.Citation for TUDatasets: [Morris, Christopher, et al. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663. 2020.](https://chrsmrrs.github.io/datasets/) ###Code # Download jraph version of MUTAG. !wget -P /tmp/ https://storage.googleapis.com/dm-educational/assets/graph-nets/jraph_datasets/mutag.pickle with open('/tmp/mutag.pickle', 'rb') as f: mutag_ds = pickle.load(f) ###Output _____no_output_____ ###Markdown The dataset is saved as a list of examples, each example is a dictionary containing an input_graph and its corresponding target. ###Code len(mutag_ds) # Inspect the first graph g = mutag_ds[0]['input_graph'] print(f'Number of nodes: {g.n_node[0]}') print(f'Number of edges: {g.n_edge[0]}') print(f'Node features shape: {g.nodes.shape}') print(f'Edge features shape: {g.edges.shape}') draw_jraph_graph_structure(g) # Target for first graph print(f"Target: {mutag_ds[0]['target']}") ###Output _____no_output_____ ###Markdown We see that there are 188 graphs, to be classified in one of 2 classes, representing "their mutagenic effect on a specific gram negative bacterium". Node features represent the 1-hot encoding of the atom type (0=C, 1=N, 2=O, 3=F, 4=I, 5=Cl, 6=Br). Edge features (`edge_attr`) represent the bond type, which we will here ignore.Let's split the dataset to use the first 150 graphs as the training set (and the rest as the test set). 
###Code train_mutag_ds = mutag_ds[:150] test_mutag_ds = mutag_ds[150:] ###Output _____no_output_____ ###Markdown Padding Graphs to Speed Up TrainingSince jax recompiles the program for each graph size, training would take a long time due to recompilation for different graph sizes. To address that, we pad the number of nodes and edges in the graphs to nearest power of two. Since jax maintains a cacheof compiled programs, the compilation cost is amortized. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def _nearest_bigger_power_of_two(x: int) -> int: """Computes the nearest power of two greater than x for padding.""" y = 2 while y < x: y *= 2 return y def pad_graph_to_nearest_power_of_two( graphs_tuple: jraph.GraphsTuple) -> jraph.GraphsTuple: """Pads a batched `GraphsTuple` to the nearest power of two. For example, if a `GraphsTuple` has 7 nodes, 5 edges and 3 graphs, this method would pad the `GraphsTuple` nodes and edges: 7 nodes --> 8 nodes (2^3) 5 edges --> 8 edges (2^3) And since padding is accomplished using `jraph.pad_with_graphs`, an extra graph and node is added: 8 nodes --> 9 nodes 3 graphs --> 4 graphs Args: graphs_tuple: a batched `GraphsTuple` (can be batch size 1). Returns: A graphs_tuple batched to the nearest power of two. """ # Add 1 since we need at least one padding node for pad_with_graphs. pad_nodes_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_node)) + 1 pad_edges_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_edge)) # Add 1 since we need at least one padding graph for pad_with_graphs. # We do not pad to nearest power of two because the batch size is fixed. pad_graphs_to = graphs_tuple.n_node.shape[0] + 1 return jraph.pad_with_graphs(graphs_tuple, pad_nodes_to, pad_edges_to, pad_graphs_to) ###Output _____no_output_____ ###Markdown Graph Network Model DefinitionWe will use `jraph.GraphNetwork()` to build our graph model. The `GraphNetwork` architecture is defined in [Battaglia et al. (2018)](https://arxiv.org/pdf/1806.01261.pdf).We first define update functions for nodes, edges, and the full graph (global). We will use MLP blocks for all three. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py @jraph.concatenated_args def edge_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Edge update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Node update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def update_global_fn(feats: jnp.ndarray) -> jnp.ndarray: """Global update function for graph net.""" # MUTAG is a binary classification task, so output pos neg logits. net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(2)]) return net(feats) def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: # Add a global paramater for graph classification. graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1])) embedder = jraph.GraphMapFeatures( hk.Linear(128), hk.Linear(128), hk.Linear(128)) net = jraph.GraphNetwork( update_node_fn=node_update_fn, update_edge_fn=edge_update_fn, update_global_fn=update_global_fn) return net(embedder(graph)) ###Output _____no_output_____ ###Markdown Loss and Accuracy FunctionDefine the classification cross-entropy loss and accuracy function. 
###Code def compute_loss(params: hk.Params, graph: jraph.GraphsTuple, label: jnp.ndarray, net: jraph.GraphsTuple) -> Tuple[jnp.ndarray, jnp.ndarray]: """Computes loss and accuracy.""" pred_graph = net.apply(params, graph) preds = jax.nn.log_softmax(pred_graph.globals) targets = jax.nn.one_hot(label, 2) # Since we have an extra 'dummy' graph in our batch due to padding, we want # to mask out any loss associated with the dummy graph. # Since we padded with `pad_with_graphs` we can recover the mask by using # get_graph_padding_mask. mask = jraph.get_graph_padding_mask(pred_graph) # Cross entropy loss. loss = -jnp.mean(preds * targets * mask[:, None]) # Accuracy taking into account the mask. accuracy = jnp.sum( (jnp.argmax(pred_graph.globals, axis=1) == label) * mask) / jnp.sum(mask) return loss, accuracy ###Output _____no_output_____ ###Markdown Training and Evaluation Functions ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def train(dataset: List[Dict[str, Any]], num_train_steps: int) -> hk.Params: """Training loop.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] # Initialize the network. params = net.init(jax.random.PRNGKey(42), graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. More information can be # found in the jax documentation. compute_loss_fn = jax.jit(jax.value_and_grad( compute_loss_fn, has_aux=True)) for idx in range(num_train_steps): graph = dataset[idx % len(dataset)]['input_graph'] label = dataset[idx % len(dataset)]['target'] # Jax will re-jit your graphnet every time a new graph shape is encountered. # In the limit, this means a new compilation every training step, which # will result in *extremely* slow training. To prevent this, pad each # batch of graphs to the nearest power of two. Since jax maintains a cache # of compiled programs, the compilation cost is amortized. graph = pad_graph_to_nearest_power_of_two(graph) # Since padding is implemented with pad_with_graphs, an extra graph has # been added to the batch, which means there should be an extra label. label = jnp.concatenate([label, jnp.array([0])]) (loss, acc), grad = compute_loss_fn(params, graph, label) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if idx % 50 == 0: print(f'step: {idx}, loss: {loss}, acc: {acc}') print('Training finished') return params def evaluate(dataset: List[Dict[str, Any]], params: hk.Params) -> Tuple[jnp.ndarray, jnp.ndarray]: """Evaluation Script.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. 
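  # (Parameters are passed in as an argument here, so nothing is initialized;
  # this graph is overwritten inside the evaluation loop below.)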
graph = dataset[0]['input_graph'] accumulated_loss = 0 accumulated_accuracy = 0 compute_loss_fn = jax.jit(functools.partial(compute_loss, net=net)) for idx in range(len(dataset)): graph = dataset[idx]['input_graph'] label = dataset[idx]['target'] graph = pad_graph_to_nearest_power_of_two(graph) label = jnp.concatenate([label, jnp.array([0])]) loss, acc = compute_loss_fn(params, graph, label) accumulated_accuracy += acc accumulated_loss += loss if idx % 100 == 0: print(f'Evaluated {idx + 1} graphs') print('Completed evaluation.') loss = accumulated_loss / idx accuracy = accumulated_accuracy / idx print(f'Eval loss: {loss}, accuracy {accuracy}') return loss, accuracy params = train(train_mutag_ds, num_train_steps=500) evaluate(test_mutag_ds, params) ###Output _____no_output_____ ###Markdown We converge at ~76% test accuracy. We could of course further tune the parameters to improve this result. Link prediction on CORA (Citation Network) The final problem type we will explore is **link prediction**, an instance of an **edge-level** task. Given a graph, our goal is to predict whether a certain edge $(u,v)$ should be present or not. This is often useful in the recommender system settings (e.g., propose new friends in a social network, propose a movie to a user).As before, the first step is to obtain node latents $h_i$ using a GNN. In this context we will use the autoencoder language and call this GNN **encoder**. Then, we learn a binary classifier $f: (h_i, h_j) \to z_{i,j}$ (**decoder**), predicting if an edge $(i,j)$ should exist or not. While we could use a more elaborate decoder (e.g., an MLP), a common approach we will also use here is to focus on obtaining good node embeddings, and for the decoder simply use the similarity between node latents, i.e. $z_{i,j} = h_i^T h_j$. For this problem we will use the [**Cora** dataset](https://linqs.github.io/linqs-website/datasets/cora), a citation graph containing 2708 scientific publications. For each publication we have a 1433-dimensional feature vector, which is a bag-of-words representation (with a small, fixed dictionary) of the paper text. The edges in this graph represent citations, and are commonly treated as undirected. Each paper is in one of seven topics (classes) so you can also use this dataset for node classification.Similar to MUTAG, we have converted this dataset to jraph for you.Citation for the use of the Cora dataset:- [Qing Lu and Lise Getoor. Link-Based Classification. International Conference on Machine Learning. 2003.](https://linqs.github.io/linqs-website/publications/id:lu-icml03)- [Sen, Prithviraj, et al. Collective classification in network data. AI magazine 29.3. 2008.](https://linqs.github.io/linqs-website/datasets/cora)- [Dataset download link](https://linqs.github.io/linqs-website/datasets/cora) ###Code # Download jraph version of Cora. !wget -P /tmp/ https://storage.googleapis.com/dm-educational/assets/graph-nets/jraph_datasets/cora.pickle with open('/tmp/cora.pickle', 'rb') as f: cora_ds = pickle.load(f) ###Output _____no_output_____ ###Markdown Splitting Edges and Adding "Negative" EdgesFor the link prediction task, we split the edges into train, val and test sets and also add "negative" examples (edges that do not correspond to a citation). 
We will ignore the topic classes.For the validation and test splits, we add the same number of existing edges ("positive examples") and non-existing edges ("negative examples").In contrast to the validation and test splits, the training split only contains positive examples (set $T_+$). The $|T_+|$ negative examples to be used during training will be sampled ad hoc in each epoch and uniformly at random from all edges that are not in $T_+$. This allows the model to see a wider range of negative examples. ###Code def train_val_test_split_edges(graph: jraph.GraphsTuple, val_perc: float = 0.05, test_perc: float = 0.1): """Split edges in input graph into train, val and test splits. For val and test sets, also include negative edges. Based on torch_geometric.utils.train_test_split_edges. """ mask = graph.senders < graph.receivers senders = graph.senders[mask] receivers = graph.receivers[mask] num_val = int(val_perc * senders.shape[0]) num_test = int(test_perc * senders.shape[0]) permuted_indices = onp.random.permutation(range(senders.shape[0])) senders = senders[permuted_indices] receivers = receivers[permuted_indices] if graph.edges is not None: edges = graph.edges[permuted_indices] val_senders = senders[:num_val] val_receivers = receivers[:num_val] if graph.edges is not None: val_edges = edges[:num_val] test_senders = senders[num_val:num_val + num_test] test_receivers = receivers[num_val:num_val + num_test] if graph.edges is not None: test_edges = edges[num_val:num_val + num_test] train_senders = senders[num_val + num_test:] train_receivers = receivers[num_val + num_test:] train_edges = None if graph.edges is not None: train_edges = edges[num_val + num_test:] # make training edges undirected by adding reverse edges back in train_senders_undir = jnp.concatenate((train_senders, train_receivers)) train_receivers_undir = jnp.concatenate((train_receivers, train_senders)) train_senders = train_senders_undir train_receivers = train_receivers_undir # Negative edges. num_nodes = graph.n_node[0] # Create a negative adjacency mask, s.t. mask[i, j] = True iff edge i->j does # not exist in the original graph. 
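  # (Only the upper triangle is kept below: edges are treated as undirected,
  # so each candidate non-edge {i, j} with i < j appears once and self-loops
  # are excluded.)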
neg_adj_mask = onp.ones((num_nodes, num_nodes), dtype=onp.uint8) # upper triangular part neg_adj_mask = onp.triu(neg_adj_mask, k=1) neg_adj_mask[graph.senders, graph.receivers] = 0 neg_adj_mask = neg_adj_mask.astype(onp.bool) neg_senders, neg_receivers = neg_adj_mask.nonzero() perm = onp.random.permutation(range(len(neg_senders))) neg_senders = neg_senders[perm] neg_receivers = neg_receivers[perm] val_neg_senders = neg_senders[:num_val] val_neg_receivers = neg_receivers[:num_val] test_neg_senders = neg_senders[num_val:num_val + num_test] test_neg_receivers = neg_receivers[num_val:num_val + num_test] train_graph = jraph.GraphsTuple( nodes=graph.nodes, edges=train_edges, senders=train_senders, receivers=train_receivers, n_node=graph.n_node, n_edge=jnp.array([len(train_senders)]), globals=graph.globals) return train_graph, neg_adj_mask, val_senders, val_receivers, val_neg_senders, val_neg_receivers, test_senders, test_receivers, test_neg_senders, test_neg_receivers ###Output _____no_output_____ ###Markdown Test the Edge Splitting Function ###Code graph = cora_ds[0]['input_graph'] train_graph, neg_adj_mask, val_pos_senders, val_pos_receivers, val_neg_senders, val_neg_receivers, test_pos_senders, test_pos_receivers, test_neg_senders, test_neg_receivers = train_val_test_split_edges(graph) print(f'Train set: {train_graph.senders.shape[0]} positive edges, we will sample the same number of negative edges at runtime') print(f'Val set: {val_pos_senders.shape[0]} positive edges, {val_neg_senders.shape[0]} negative edges') print(f'Test set: {test_pos_senders.shape[0]} positive edges, {test_neg_senders.shape[0]} negative edges') print(f'Negative adjacency mask shape: {neg_adj_mask.shape}') print(f'Numbe of negative edges to sample from: {neg_adj_mask.sum()}') ###Output _____no_output_____ ###Markdown *Note*: It will often happen during training that as a negative example, we sample an initially existing edge (that is now e.g. a positive example in the test set). We are however not allowed to check for this, as we should be unaware of the existence of test edges during training.Assuming our dot product decoder, we are essentially attempting to bring the latents of endpoints of edges from $T_+$ closer together, and make the latents of all other pairs of nodes as distant as possible. As this is impossible to fully satisfy, the hope is that the model will "fail" to distance those pairs of nodes where the edges should actually exist (positive examples from the test set). Graph Network Model DefinitionWe will use jraph.GraphNetwork to build our graph net model.We first define update functions for node features. We are not using edge or global features for this task. ###Code @jraph.concatenated_args def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Node update function for graph net.""" net = hk.Sequential([hk.Linear(128), jax.nn.relu, hk.Linear(64)]) return net(feats) def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Network definition.""" graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1])) net = jraph.GraphNetwork( update_node_fn=node_update_fn, update_edge_fn=None, update_global_fn=None) return net(graph) def decode(pred_graph: jraph.GraphsTuple, senders: jnp.ndarray, receivers: jnp.ndarray) -> jnp.ndarray: """Given a set of candidate edges, take dot product of respective nodes. Args: pred_graph: input graph. senders: Senders of candidate edges. receivers: Receivers of candidate edges. Returns: For each edge, computes dot product of the features of the two nodes. 
""" return jnp.squeeze( jnp.sum(pred_graph.nodes[senders] * pred_graph.nodes[receivers], axis=1)) ###Output _____no_output_____ ###Markdown To evaluate our model, we first apply the sigmoid function to obtained dot products to get a score $s_{i,j} \in [0,1]$ for each edge. Now, we can pick a threshold $\tau$ and say that we predict all pairs $(i,j)$ s.t. $s_{i,j} \geq \tau$ as edges (and all the rest as non-edges). Loss and ROC-AUC-Metric FunctionDefine the binary classification cross-entropy loss.To aggregate the results over all choices of $\tau$, we will use ROC-AUC (the area under the ROC curve) as our evaluation metric. ###Code from sklearn.metrics import roc_auc_score def compute_bce_with_logits_loss(x: jnp.ndarray, y: jnp.ndarray) -> jnp.ndarray: """Computes binary cross-entropy with logits loss. Combines sigmoid and BCE, and uses log-sum-exp trick for numerical stability. See https://stackoverflow.com/a/66909858 if you want to learn more. Args: x: Predictions (logits). y: Labels. Returns: Binary cross-entropy loss with mean aggregation. """ max_val = jnp.clip(x, 0, None) loss = x - x * y + max_val + jnp.log( jnp.exp(-max_val) + jnp.exp((-x - max_val))) return loss.mean() def compute_loss(params: hk.Params, graph: jraph.GraphsTuple, senders: jnp.ndarray, receivers: jnp.ndarray, labels: jnp.ndarray, net: hk.Transformed) -> Tuple[jnp.ndarray, jnp.ndarray]: """Computes loss.""" pred_graph = net.apply(params, graph) preds = decode(pred_graph, senders, receivers) loss = compute_bce_with_logits_loss(preds, labels) return loss, preds def compute_roc_auc_score(preds: jnp.ndarray, labels: jnp.ndarray) -> jnp.ndarray: """Computes roc auc (area under the curve) score for classification.""" s = jax.nn.sigmoid(preds) roc_auc = roc_auc_score(labels, s) return roc_auc ###Output _____no_output_____ ###Markdown Helper function for sampling negative edges during training. ###Code def negative_sampling( graph: jraph.GraphsTuple, num_neg_samples: int, key: jnp.DeviceArray) -> Tuple[jnp.DeviceArray, jnp.DeviceArray]: """Samples negative edges, i.e. edges that don't exist in the input graph.""" num_nodes = graph.n_node[0] total_possible_edges = num_nodes**2 # convert 2D edge indices to 1D representation. pos_idx = graph.senders * num_nodes + graph.receivers # Percentage to oversample edges, so most likely will sample enough neg edges. alpha = jnp.abs(1 / (1 - 1.1 * (graph.senders.shape[0] / total_possible_edges))) perm = jax.random.randint( key, shape=(int(alpha * num_neg_samples),), minval=0, maxval=total_possible_edges, dtype=jnp.uint32) # mask where sampled edges are positive edges. mask = jnp.isin(perm, pos_idx) # remove positive edges. perm = perm[~mask][:num_neg_samples] # convert 1d back to 2d edge indices. neg_senders = perm // num_nodes neg_receivers = perm % num_nodes return neg_senders, neg_receivers ###Output _____no_output_____ ###Markdown Let's write the training loop: ###Code def train(dataset: List[Dict[str, Any]], num_epochs: int) -> hk.Params: """Training loop.""" key = jax.random.PRNGKey(42) # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] train_graph, _, val_pos_s, val_pos_r, val_neg_s, val_neg_r, test_pos_s, \ test_pos_r, test_neg_s, test_neg_r = train_val_test_split_edges( graph) # Prepare the validation and test data. 
val_senders = jnp.concatenate((val_pos_s, val_neg_s)) val_receivers = jnp.concatenate((val_pos_r, val_neg_r)) val_labels = jnp.concatenate( (jnp.ones(len(val_pos_s)), jnp.zeros(len(val_neg_s)))) test_senders = jnp.concatenate((test_pos_s, test_neg_s)) test_receivers = jnp.concatenate((test_pos_r, test_neg_r)) test_labels = jnp.concatenate( (jnp.ones(len(test_pos_s)), jnp.zeros(len(test_neg_s)))) # Initialize the network. params = net.init(key, train_graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. More information can be # found in the jax documentation. compute_loss_fn = jax.jit(jax.value_and_grad(compute_loss_fn, has_aux=True)) for epoch in range(num_epochs): num_neg_samples = train_graph.senders.shape[0] train_neg_senders, train_neg_receivers = negative_sampling( train_graph, num_neg_samples=num_neg_samples, key=key) train_senders = jnp.concatenate((train_graph.senders, train_neg_senders)) train_receivers = jnp.concatenate( (train_graph.receivers, train_neg_receivers)) train_labels = jnp.concatenate( (jnp.ones(len(train_graph.senders)), jnp.zeros(len(train_neg_senders)))) (train_loss, train_preds), grad = compute_loss_fn(params, train_graph, train_senders, train_receivers, train_labels) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if epoch % 10 == 0 or epoch == (num_epochs - 1): train_roc_auc = compute_roc_auc_score(train_preds, train_labels) val_loss, val_preds = compute_loss(params, train_graph, val_senders, val_receivers, val_labels, net) val_roc_auc = compute_roc_auc_score(val_preds, val_labels) print(f'epoch: {epoch}, train_loss: {train_loss:.3f}, ' f'train_roc_auc: {train_roc_auc:.3f}, val_loss: {val_loss:.3f}, ' f'val_roc_auc: {val_roc_auc:.3f}') test_loss, test_preds = compute_loss(params, train_graph, test_senders, test_receivers, test_labels, net) test_roc_auc = compute_roc_auc_score(test_preds, test_labels) print('Training finished') print( f'epoch: {epoch}, test_loss: {test_loss:.3f}, test_roc_auc: {test_roc_auc:.3f}' ) return params ###Output _____no_output_____ ###Markdown Let's train the model! We expect the model to reach roughly test_roc_auc of 0.84.(Note that ROC-AUC is a scalar between 0 and 1, with 1 being the ROC-AUC of a perfect classifier.) ###Code params = train(cora_ds, num_epochs=200) ###Output _____no_output_____ ###Markdown Introduction to Graph Neural Nets with JAX/jraph*Lisa Wang, DeepMind ([email protected]), Nikola Jovanović, ETH Zurich ([email protected])***Colab Runtime:**If possible, please use a GPU hardware accelerator to run this colab. 
You can choose that under *Runtime > Change Runtime Type*.**Prerequisites:*** Some familiarity with [JAX](https://github.com/google/jax), you can refer to this [colab](https://colab.sandbox.google.com/github/google/jax/blob/master/docs/jax-101/01-jax-basics.ipynb) for an introduction to JAX.* Neural network basics* Graph theory basics (MIT Open Courseware [slides](https://ocw.mit.edu/courses/civil-and-environmental-engineering/1-022-introduction-to-network-models-fall-2018/lecture-notes/MIT1_022F18_lec2.pdf) by Amir Ajorlou)We recommend watching the [Theoretical Foundations of Graph Neural Networks Lecture](https://www.youtube.com/watch?v=uF53xsT7mjc&) by Petar Veličković before working through this colab. The talk provides a theoretical introduction to Graph Neural Networks (GNNs), historical context and motivating examples.**Outline:*** [Fundamental Graph Concepts](scrollTo=gsKA-syx_LUi)* [Graph Prediction Tasks](scrollTo=spQGRxhPN8Eo)* [Intro to the jraph Library](scrollTo=3C5YI9M0vwvb)* [Graph Convolutional Network (GCN) Layer](scrollTo=NZRMF2d-h2pd)* [Build GCN Model with Multiple Layers](scrollTo=lha8rbQ78l3S)* [Node Classification with GCN on Karate Club Dataset](scrollTo=Z5t7kw7SE_h4)* [Graph Attention (GAT) Layer](scrollTo=yg8g96NdBCK6)* [Train GAT Model on Karate Club Dataset](scrollTo=anfVGJwBe27v)* [Graph Classification on MUTAG (Molecules)](scrollTo=n5TxaTGzBkBa)* [Link Prediction on CORA (Citation Network)](scrollTo=OwVE88dTRC6V)* [Bonus: Intro to Graph Adversarial Attacks](scrollTo=35kbP8GZRFEm)**Additional Resources:*** Battaglia et al. (2018): [Relational inductive biases, deep learning, and graph networks](https://arxiv.org/pdf/1806.01261)---Some sections in this colab build on the [GraphNets Tutorial colab in pytorch](https://github.com/eemlcommunity/PracticalSessions2021/blob/main/graphnets/graphnets_tutorial.ipynb) by Nikola Jovanović.We would like to thank Razvan Pascanu and Petar Veličković for their valuable input and feedback.---*Copyright 2021 by the Authors.**Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0**Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.* Setup: Install and Import libraries ###Code !pip install git+git://github.com/deepmind/jraph.git !pip install flax !pip install dm-haiku # Imports %matplotlib inline import functools import matplotlib.pyplot as plt import jax import jax.numpy as jnp import jax.tree_util as tree import jraph import flax import haiku as hk import optax import numpy as onp import networkx as nx from typing import Tuple ###Output _____no_output_____ ###Markdown Fundamental Graph ConceptsA graph consists of a set of nodes and a set of edges, where edges form connections between nodes.More formally, a graph is defined as $ \mathcal{G} = (\mathcal{V}, \mathcal{E})$ where $\mathcal{V}$ is the set of vertices / nodes, and $\mathcal{E}$ is the set of edges.In an **undirected** graph, each edge is an unordered pair of two nodes $ \in \mathcal{V}$. E.g. 
a friend network can be represented as an undirected graph, assuming that the relationship "*A is friends with B*" implies "*B is friends with A*".In a **directed** graph, each edge is an ordered pair of nodes $ \in \mathcal{V}$. E.g. a citation network would be best represented with a directed graph, since the relationship "*A cites B*" does not imply "*B cites A*".The **degree** of a node is defined as the number of edges incident on it, i.e. the sum of incoming and outgoing edges for that node.The **in-degree** is the sum of incoming edges only, and the **out-degree** is the sum of outgoing edges only.There are two common ways to represent $\mathcal{E}$:1. As an **adjacency matrix**: a binary square matrix $A$ of size $|\mathcal{V}| \times |\mathcal{V}|$, where $A_{u,v}=1$ iff there is a connection between nodes $u$ and $v$.2. As an **adjacency list**: a list of ordered pairs $(u,v)$. Example: Below is a directed graph with four nodes and four edges.The arrows on the edges indicate the direction of each edge, e.g. there is an edge going from node 0 to node 1.node 0 has out-degree of 1, since it has one outgoing edge, and an in-degree of 2, since it has two incoming edges.The adjacency list representation of edges is:[(0, 1), (1, 2), (2, 0), (3, 0)]And adjacency matrix:$$\begin{array}{l|llll} source \setminus dest & n_0 & n_1 & n_2 & n_3 \\ \hline n_0 & 0 & 1 & 0 & 0 \\n_1 & 0 & 0 & 1 & 0 \\n_2 & 1 & 0 & 0 & 0 \\n_3 & 1 & 0 & 0 & 0\end{array}$$ Graph Prediction TasksWhat are the kinds of problems we want to solve on graphs?The tasks fall into roughly three categories:1. **Node Classification**: E.g. what is the topic of a paper given a citation network of papers?2. **Link Prediction / Edge Classification**: E.g. are two people in a social network friends?3. **Graph Classification**: E.g. is this protein molecule (represented as a graph) likely going to be effective?The three main graph learning tasks. Image source: Petar Veličković.Which examples of graph prediction tasks come to your mind? Which task types do they correspond to?We will create and train models on all three task types in this tutorial. Intro to the jraph LibraryIn the following sections, we will learn how to represent graphs and build GNNs in Python. We will use [jraph](https://github.com/deepmind/jraph), a lightweight library for working with GNNs in [JAX](https://github.com/google/jax). Representing a graph in jraphIn jraph, a graph is represented with a `GraphsTuple` object. In addition to defining the graph structure of nodes and edges, you can also store node features, edge features and global graph features in a `GraphsTuple`.In the `GraphsTuple`, edges are represented with an adjacency list, which is stored in two aligned arrays of node indices: senders (source nodes) and receivers (destination nodes).Each index corresponds to one edge, e.g. edge `i` goes from `senders[i]` to `receivers[i]`.You can even store multiple graphs in one `GraphsTuple` object.We will start with creating a simple directed graph with 4 nodes and 5 edges. We will also add toy features to the nodes, using `2*node_index` as the feature.We will later use this toy graph in the GCN demo. ###Code def build_toy_graph(): """Define a four node graph, each node has a scalar as its feature.""" # Nodes are defined implicitly by their features. # We will add four nodes, each with a feature, e.g. # node 0 has feature [0.], # node 1 has feature [2.] etc. # len(node_features) is the number of nodes.
node_features = jnp.array([[0.], [2.], [4.], [6.]]) # We will now specify 5 directed edges connecting the nodes we defined above. # We define this with `senders` (source node indices) and `receivers` # (destination node indices). # For example, to add an edge from node 0 to node 1, we append 0 to senders, # and 1 to receivers. # We can do the same for all 5 edges: # 0 -> 1 # 1 -> 2 # 2 -> 0 # 3 -> 0 # 0 -> 3 senders = jnp.array([0, 1, 2, 3, 0]) receivers = jnp.array([1, 2, 0, 0, 3]) # You can optionally add edge attributes to the 5 edges. edges = jnp.array([[5.], [6.], [7.], [8.], [8.]]) # We then save the number of nodes and the number of edges. # This information is used to make running GNNs over multiple graphs # in a GraphsTuple possible. n_node = jnp.array([4]) n_edge = jnp.array([5]) # Optionally you can add `global` information, such as a graph label. global_context = jnp.array([[1]]) # Same feature dimensions as nodes and edges. graph = jraph.GraphsTuple( nodes=node_features, edges=edges, senders=senders, receivers=receivers, n_node=n_node, n_edge=n_edge, globals=global_context ) return graph graph = build_toy_graph() ###Output _____no_output_____ ###Markdown Inspecting the GraphsTuple ###Code # Number of nodes # Note that `n_node` returns an array. The length of `n_node` corresponds to # the number of graphs stored in one `GraphsTuple`. # In this case, we only have one graph, so n_node has length 1. graph.n_node # Number of edges graph.n_edge # Node features graph.nodes # Edge features graph.edges # Edges graph.senders graph.receivers # Graph-level features graph.globals ###Output _____no_output_____ ###Markdown Visualizing the GraphTo visualize the graph structure of the graph we created above, we will use the [`networkx`](networkx.org) library because it already has functions for drawing graphs.We first convert the `jraph.GraphsTuple` to a `networkx.DiGraph`. ###Code def convert_jraph_to_networkx_graph(jraph_graph): nodes, edges, receivers, senders, _, _, _ = jraph_graph nx_graph = nx.DiGraph() if nodes is None: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n) else: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n, node_feature=nodes[n]) if edges is None: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge(int(senders[e]), int(receivers[e])) else: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge(int(senders[e]), int(receivers[e]), edge_feature=edges[e]) return nx_graph def draw_jraph_graph_structure(jraph_graph: jraph.GraphsTuple): nx_graph = convert_jraph_to_networkx_graph(jraph_graph) pos = nx.spring_layout(nx_graph) nx.draw(nx_graph, pos=pos, with_labels = True, node_size=500, font_color='yellow') draw_jraph_graph_structure(graph) ###Output _____no_output_____ ###Markdown Graph Convolutional Network (GCN) LayerNow let's implement our first graph network!The graph convolutional network, introduced by Kipf et al. (2017) in https://arxiv.org/abs/1609.02907, is one of the basic graph network architectures. We will build its core building block, the graph convolutional layer.In a convolutional neural network (CNN), a convolutional filter (e.g. 3x3) is applied repeatedly to different parts of a larger input (e.g. 64x64) by striding across the input.In a GCN, a convolution filter is applied to the neighbourhoods around a node in a graph.However, there are also some differences to point out:In contrast to the CNN filter, the neighbourhoods in a GCN can be of different sizes, and there is no ordering of inputs.
To see that, note that the CNN filter performs a weighted sum aggregation over the inputs with learnable weights, where each filter input has its own weight. In the GCN, the same weight is applied to all neighbours and the aggregation function is not learned. In other words, in a GCN, each neighbor contributes equally. This is why the CNN filter is not order-invariant, but the GCN filter is.Comparison of CNN and GCN filters.Image source: https://arxiv.org/pdf/1901.00596.pdfMore specifically, the GCN layer performs two steps:1. _Compute messages / update node features_: Create a feature vector $\vec{h}_n$ for each node $n$ (e.g. with an MLP). This is going to be the message that this node will pass to neighboring nodes.2. _Message-passing / aggregate node features_: For each node, calculate a new feature vector $\vec{h}'_n$ based on the messages (features) from the nodes in its neighborhood. In a directed graph, only nodes from incoming edges are counted as neighbors. The image below shows this aggregation step. There are multiple options for aggregation in a GCN, e.g. taking the mean, the sum, the min or max. (Later in this tutorial, we will also see how we can make the aggregation function dependent on the node features by adding an attention mechanism in the Graph Attention Network.)*\"A generic overview of a graph convolution operation, highlighting the relevant information for deriving the next-level features for every node in the graph.\"* Image source: Petar Veličković (https://github.com/PetarV-/TikZ) Simple GCN Layer ###Code def apply_simplified_gcn(graph: jraph.GraphsTuple): # Unpack GraphsTuple nodes, _, receivers, senders, _, _, _ = graph # 1. Update node features # For simplicity, we will first use an identity function here, and replace it # with a trainable MLP block later. update_node_fn = lambda nodes: nodes nodes = update_node_fn(nodes) # 2. Aggregate node features over nodes in neighborhood # Equivalent to jnp.sum(n_node), but jittable total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] aggregate_nodes_fn = jax.ops.segment_sum # Compute new node features by aggregating messages from neighboring nodes nodes = tree.tree_map(lambda x: aggregate_nodes_fn(x[senders], receivers, total_num_nodes), nodes) out_graph = graph._replace(nodes=nodes) return out_graph ###Output _____no_output_____ ###Markdown We can now run the graph convolution on our toy graph from before. ###Code graph = build_toy_graph() ###Output _____no_output_____ ###Markdown Here is the visualized graph. ###Code draw_jraph_graph_structure(graph) out_graph = apply_simplified_gcn(graph) ###Output _____no_output_____ ###Markdown Since we used the identity function for updating nodes and sum aggregation, we can verify the results pretty easily. As a reminder, in this toy graph, the node features are twice the node index.Node 0: sum of features from node 2 and node 3 $\rightarrow$ 10.Node 1: sum of features from node 0 $\rightarrow$ 0.Node 2: sum of features from node 1 $\rightarrow$ 2.Node 3: sum of features from node 0 $\rightarrow$ 0. ###Code out_graph.nodes ###Output _____no_output_____ ###Markdown Add Trainable Parameters to GCN layerSo far our graph convolution operation doesn't have any learnable parameters.Let's add an MLP block to the update function to make it trainable.
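Before we do that, here is a minimal, self-contained sanity check (an illustrative addition, not part of the original colab; the names `toy_features`, `toy_senders` and `toy_receivers` are made up for this example) showing how `jax.ops.segment_sum` reproduces exactly the numbers we just verified: the feature of each edge's sender node is summed into the bucket of its receiver. ###Code
# Illustrative sanity check (not from the original colab): reproduce the
# aggregation numbers above with a direct call to segment_sum.
import jax
import jax.numpy as jnp

toy_features = jnp.array([[0.], [2.], [4.], [6.]])  # node features, 2 * node index
toy_senders = jnp.array([0, 1, 2, 3, 0])
toy_receivers = jnp.array([1, 2, 0, 0, 3])

messages = toy_features[toy_senders]  # one message per edge: the sender's feature
aggregated = jax.ops.segment_sum(messages, toy_receivers, num_segments=4)
print(aggregated)  # expected values: [[10.], [0.], [2.], [0.]]
###Output
_____no_output_____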
###Code class MLP(hk.Module): def __init__(self, features: jnp.ndarray): super().__init__() self.features = features def __call__(self, x: jnp.ndarray): layers = [] for feat in self.features[:-1]: layers.append(hk.Linear(feat)) layers.append(jax.nn.relu) layers.append(hk.Linear(self.features[-1])) mlp = hk.Sequential(layers) return mlp(x) # Use MLP block to define the update node function update_node_fn = lambda x: MLP(features=[8, 4])(x) ###Output _____no_output_____ ###Markdown Check outputs of `update_node_fn` with MLP Block ###Code graph = build_toy_graph() update_node_module = hk.without_apply_rng(hk.transform(update_node_fn)) params = update_node_module.init(jax.random.PRNGKey(42), graph.nodes) out = update_node_module.apply(params, graph.nodes) ###Output _____no_output_____ ###Markdown As output, we expect the updated node features. We should see one array of dim 4 for each of the 4 nodes, which is the result of applying a single MLP block to the features of each node individually. ###Code out ###Output _____no_output_____ ###Markdown Add Self-Edges (Edges connecting a node to itself)For each node, add an edge of the node onto itself. This way, nodes will include themselves in the aggregation step. ###Code def add_self_edges_fn(receivers, senders, total_num_nodes): """Adds self edges. Assumes self edges are not in the graph yet.""" receivers = jnp.concatenate((receivers, jnp.arange(total_num_nodes)), axis=0) senders = jnp.concatenate((senders, jnp.arange(total_num_nodes)), axis=0) return receivers, senders ###Output _____no_output_____ ###Markdown Add Symmetric NormalizationNote that the nodes may have different numbers of neighbors / degrees.This could lead to instabilities during neural network training, e.g. exploding or vanishing gradients. To address that, normalization is a commonly used method. In this case, we will normalize by node degrees.As a first attempt, we could count the number of incoming edges (including self-edge) and divide by that value.More formally, let $A$ be the adjacency matrix defining the edges of the graph.Then we define the degree matrix $D$ as a diagonal matrix with $D_{ii} = \sum_jA_{ij}$ (the degree of node $i$)Now we can normalize $AH$ by dividing it by the node degrees:$${D}^{-1}AH$$To take both the in and out degrees into account, we can use symmetric normalization, which is also what Kipf and Welling proposed in their [paper](https://arxiv.org/abs/1609.02907):$$D^{-\frac{1}{2}}AD^{-\frac{1}{2}}H$$ General GCN LayerNow we can write a more general and configurable version of the Graph Convolution layer, allowing the caller to specify:* **`update_node_fn`**: Function to use to update node features (e.g. the MLP block version we just implemented)* **`aggregate_nodes_fn`**: Aggregation function to use to aggregate messages from neighbourhood.* **`add_self_edges`**: Whether to add self edges for aggregation step.* **`symmetric_normalization`**: Whether to add symmetric normalization. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L506 def GraphConvolution( update_node_fn, aggregate_nodes_fn=jax.ops.segment_sum, add_self_edges: bool = False, symmetric_normalization: bool = True): """Returns a method that applies a Graph Convolution layer. Graph Convolutional layer as in https://arxiv.org/abs/1609.02907, NOTE: This implementation does not add an activation after aggregation. If you are stacking layers, you may want to add an activation between each layer. Args: update_node_fn: function used to update the nodes. 
In the paper a single layer MLP is used. aggregate_nodes_fn: function used to aggregates the sender nodes. add_self_edges: whether to add self edges to nodes in the graph as in the paper definition of GCN. Defaults to False. symmetric_normalization: whether to use symmetric normalization. Defaults to True. Returns: A method that applies a Graph Convolution layer. """ def _ApplyGCN(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Applies a Graph Convolution layer.""" nodes, _, receivers, senders, _, _, _ = graph # First pass nodes through the node updater. nodes = update_node_fn(nodes) # Equivalent to jnp.sum(n_node), but jittable total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. # In principle, a `GraphsTuple` should partition by n_edge, but in # this case it is not required since a GCN is agnostic to whether # the `GraphsTuple` is a batch of graphs or a single large graph. conv_receivers, conv_senders = add_self_edges_fn(receivers, senders, total_num_nodes) else: conv_senders = senders conv_receivers = receivers # pylint: disable=g-long-lambda if symmetric_normalization: # Calculate the normalization values. count_edges = lambda x: jax.ops.segment_sum( jnp.ones_like(conv_senders), x, total_num_nodes) sender_degree = count_edges(conv_senders) receiver_degree = count_edges(conv_receivers) # Pre normalize by sqrt sender degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: x * jax.lax.rsqrt(jnp.maximum(sender_degree, 1.0))[:, None], nodes, ) # Aggregate the pre-normalized nodes. nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # Post normalize by sqrt receiver degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: (x * jax.lax.rsqrt(jnp.maximum(receiver_degree, 1.0))[:, None]), nodes, ) else: nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # pylint: enable=g-long-lambda return graph._replace(nodes=nodes) return _ApplyGCN ###Output _____no_output_____ ###Markdown Test General GCN Layer ###Code gcn_layer = GraphConvolution( update_node_fn=lambda n: MLP(features=[8, 4])(n), aggregate_nodes_fn=jax.ops.segment_sum, add_self_edges=True, symmetric_normalization=True ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Build GCN Model with Multiple LayersWith a single GCN layer, a node's representation after the GCN layer is onlyinfluenced by its direct neighbourhood. However, we may want to consider larger neighbourhoods, i.e. more than just 1 hop away. To achieve that, we can stackmultiple GCN layers, similar to how stacking CNN layers expands the input region.We will define a network with three GCN layers: ###Code def gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a graph neural network with 3 GCN layers. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. 
""" gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(4)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) graph = gn(graph) return graph graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Node Classification with GCN on Karate Club DatasetTime to try out our GCN on our first graph prediction task! Zachary's Karate Club Dataset[Zachary's karate club](https://en.wikipedia.org/wiki/Zachary%27s_karate_club) is a small dataset commonly used as an example for a social graph. It's great for demo purposes, as it's easy to visualize and quick to train a model on it.A node represents a student or instructor in the club. An edge means that those two people have interacted outside of the class. There are two instructors in the club.Each student is assigned to one of two instructors. Optimizing the GCN on the Karate Club Node Classification TaskThe task is to predict the assignment of students to instructors, given the social graph and only knowing the assignment of two nodes (the two instructors) a priori.In other words, out of the 34 nodes, only two nodes are labeled, and we are trying to optimize the assignment of the other 32 nodes, by **maximizing the log-likelihood of the two known node assignments**.We will compute the accuracy of our node assignments by comparing to the ground-truth assignments. **Note that the ground-truth for the 32 student nodes is not used in the loss function itself.** Let's load the dataset: ###Code """Zachary's karate club example. From https://github.com/deepmind/jraph/blob/master/jraph/examples/zacharys_karate_club.py. Here we train a graph neural network to process Zachary's karate club. https://en.wikipedia.org/wiki/Zachary%27s_karate_club Zachary's karate club is used in the literature as an example of a social graph. Here we use a graphnet to optimize the assignments of the students in the karate club to two distinct karate instructors (Mr. Hi and John A). """ def get_zacharys_karate_club() -> jraph.GraphsTuple: """Returns GraphsTuple representing Zachary's karate club.""" social_graph = [ (1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2), (4, 0), (5, 0), (6, 0), (6, 4), (6, 5), (7, 0), (7, 1), (7, 2), (7, 3), (8, 0), (8, 2), (9, 2), (10, 0), (10, 4), (10, 5), (11, 0), (12, 0), (12, 3), (13, 0), (13, 1), (13, 2), (13, 3), (16, 5), (16, 6), (17, 0), (17, 1), (19, 0), (19, 1), (21, 0), (21, 1), (25, 23), (25, 24), (27, 2), (27, 23), (27, 24), (28, 2), (29, 23), (29, 26), (30, 1), (30, 8), (31, 0), (31, 24), (31, 25), (31, 28), (32, 2), (32, 8), (32, 14), (32, 15), (32, 18), (32, 20), (32, 22), (32, 23), (32, 29), (32, 30), (32, 31), (33, 8), (33, 9), (33, 13), (33, 14), (33, 15), (33, 18), (33, 19), (33, 20), (33, 22), (33, 23), (33, 26), (33, 27), (33, 28), (33, 29), (33, 30), (33, 31), (33, 32)] # Add reverse edges. social_graph += [(edge[1], edge[0]) for edge in social_graph] n_club_members = 34 return jraph.GraphsTuple( n_node=jnp.asarray([n_club_members]), n_edge=jnp.asarray([len(social_graph)]), # One-hot encoding for nodes, i.e. argmax(nodes) = node index. nodes=jnp.eye(n_club_members), # No edge features. 
edges=None, globals=None, senders=jnp.asarray([edge[0] for edge in social_graph]), receivers=jnp.asarray([edge[1] for edge in social_graph])) def get_ground_truth_assignments_for_zacharys_karate_club() -> jnp.ndarray: """Returns ground truth assignments for Zachary's karate club.""" return jnp.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) graph = get_zacharys_karate_club() print(f'Number of nodes: {graph.n_node[0]}') print(f'Number of edges: {graph.n_edge[0]}') ###Output _____no_output_____ ###Markdown Visualize the karate club graph with circular node layout: ###Code nx_graph = convert_jraph_to_networkx_graph(graph) pos = nx.circular_layout(nx_graph) plt.figure(figsize=(6, 6)) nx.draw(nx_graph, pos=pos, with_labels = True, node_size=500, font_color='yellow') ###Output _____no_output_____ ###Markdown Define the GCN with the `GraphConvolution` layers we implemented: ###Code def gcn_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GCN for the karate club task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) # output dim is 2 because we have 2 output classes. graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Training and evaluation code: ###Code def optimize_club(network, num_steps: int): """Solves the karate club problem by optimizing the assignments of students.""" zacharys_karate_club = get_zacharys_karate_club() labels = get_ground_truth_assignments_for_zacharys_karate_club() params = network.init(jax.random.PRNGKey(42), zacharys_karate_club) @jax.jit def predict(params): decoded_graph = network.apply(params, zacharys_karate_club) return jnp.argmax(decoded_graph.nodes, axis=1) @jax.jit def prediction_loss(params): decoded_graph = network.apply(params, zacharys_karate_club) # We interpret the decoded nodes as a pair of logits for each node. log_prob = jax.nn.log_softmax(decoded_graph.nodes) # The only two assignments we know a-priori are those of Mr. Hi (Node 0) # and John A (Node 33). return -(log_prob[0, 0] + log_prob[33, 1]) opt_init, opt_update = optax.adam(1e-2) opt_state = opt_init(params) @jax.jit def update(params, opt_state): g = jax.grad(prediction_loss)(params) updates, opt_state = opt_update(g, opt_state) return optax.apply_updates(params, updates), opt_state @jax.jit def accuracy(params): decoded_graph = network.apply(params, zacharys_karate_club) return jnp.mean(jnp.argmax(decoded_graph.nodes, axis=1) == labels) for step in range(num_steps): print(f"step {step} accuracy {accuracy(params).item():.2f}") params, opt_state = update(params, opt_state) return predict(params) ###Output _____no_output_____ ###Markdown Let's train the GCN! We expect this model reach an accuracy of about 0.91. ###Code network = hk.without_apply_rng(hk.transform(gcn_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown Try modifying the model parameters to see if you can improve the accuracy!You can also modify the dataset itself, and see how that influences model training. Node assignments predicted by the model at the end of training: ###Code result ###Output _____no_output_____ ###Markdown Visualize ground truth and predicted node assignments:What do you think of the results? 
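Besides eyeballing the two plots below, a quick numeric comparison helps (a small illustrative sketch, not part of the original colab, reusing `result` and the ground-truth helper defined above; `num_wrong` is just a local name for this example): ###Code
# Count how many of the 34 nodes the trained GCN assigns differently from the
# ground truth (illustrative check, not part of the original colab).
gt_assignments = get_ground_truth_assignments_for_zacharys_karate_club()
num_wrong = int(jnp.sum(result != gt_assignments))
print(f'{num_wrong} of {gt_assignments.shape[0]} nodes differ from the ground-truth assignment')
###Output
_____no_output_____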
###Code zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GCN') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Attention (GAT) LayerWhile the GCN we covered in the previous section can learn meaningful representations, it also has some shortcomings. Can you think of any?In the GCN layer, the messages from all its neighbours and the node itself are equally weighted. This may lead to loss of node-specific information. E.g., consider the case when a set of nodes shares the same set of neighbors, and start out with different node features. Then because of averaging, their resulting output features would be the same. Adding self-edges mitigates this issue by a small amount, but this problem is magnified with increasing number of GCN layers and number of edges connecting to a node.The graph attention (GAT) mechanism, as proposed by [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903), allows the network to learn how to weigh / assign importance to the node features from the neighbourhood when computing the new node features. This is very similar to the idea of using attention in Transformers, which were introduced in [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762).(One could even argue that Transformers are graph attention networks operating on the special case of fully-connected graphs.)In the figure below, $\vec{h}$ are the node features and $\vec{\alpha}$ are the learned attention weights.Figure Credit: [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903).(Detail: This image is showing multi-headed attention with 3 heads, each color corresponding to a different head. At the end, an aggregation function is applied over all the heads.)To obtain the output node features of the GAT layer, we compute:$$ \vec{h}'_i = \sum _{j \in \mathcal{N}(i)}\alpha_{ij} \mathbf{W} \vec{h}_j$$Here, $\mathbf{W}$ is a weight matrix which performs a linear transformation on the input. How do we obtain $\alpha$, or in other words, learn what to pay attention to?Intuitively, the attention coefficient $\alpha_{ij}$ should rely on both the transformed features from nodes $i$ and $j$. 
So let's first define an attention mechanism function $\mathrm{attention\_fn}$ that computes the intermediary attention coefficients $e_{ij}$:$$ e_{ij} = \mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j)$$To obtain normalized attention weights $\alpha$, we apply a softmax:$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum _{j \in \mathcal{N}(i)}\exp(e_{ij})}$$For the function $a$, the authors of the GAT paper chose to concatenate the transformed node features (denoted by $||$) and apply a single-layer feedforward network, parameterized by a weight vector $\vec{\mathbf{a}}$ and with LeakyRelu as non-linearity.In the implementation below, we refer to $\mathbf{W}$ as `attention_query_fn` and $att\_fn$ as `attention_logit_fn`.$$\mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j) = \text{LeakyReLU}(\vec{\mathbf{a}}(\mathbf{W}\vec{h}_i || \mathbf{W}\vec{h}_j))$$The figure below summarizes this attention mechanism visually.Figure Credit: Petar Velickovic.<!-- $\sum_{j \in \mathcal{N}(i)}\vec{\alpha}_{ij} \stackrel{!}{=}1 $ --> ###Code # GAT implementation adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L442. def GAT(attention_query_fn, attention_logit_fn, node_update_fn=None, add_self_edges=True): """Returns a method that applies a Graph Attention Network layer. Graph Attention message passing as described in https://arxiv.org/pdf/1710.10903.pdf. This model expects node features as a jnp.array, may use edge features for computing attention weights, and ignore global features. It does not support nests. Args: attention_query_fn: function that generates attention queries from sender node features. attention_logit_fn: function that converts attention queries into logits for softmax attention. node_update_fn: function that updates the aggregated messages. If None, will apply leaky relu and concatenate (if using multi-head attention). Returns: A function that applies a Graph Attention layer. """ # pylint: disable=g-long-lambda if node_update_fn is None: # By default, apply the leaky relu and then concatenate the heads on the # feature axis. node_update_fn = lambda x: jnp.reshape( jax.nn.leaky_relu(x), (x.shape[0], -1)) def _ApplyGAT(graph): """Applies a Graph Attention layer.""" nodes, edges, receivers, senders, _, _, _ = graph # Equivalent to the sum of n_node, but statically known. try: sum_n_node = nodes.shape[0] except IndexError: raise IndexError('GAT requires node features') # Pass nodes through the attention query function to transform # node features, e.g. with an MLP. nodes = attention_query_fn(nodes) total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. receivers, senders = add_self_edges_fn(receivers, senders, total_num_nodes) # We compute the softmax logits using a function that takes the # embedded sender and receiver attributes. sent_attributes = nodes[senders] received_attributes = nodes[receivers] att_softmax_logits = attention_logit_fn( sent_attributes, received_attributes, edges) # Compute the attention softmax weights on the entire tree. att_weights = jraph.segment_softmax(att_softmax_logits, segment_ids=receivers, num_segments=sum_n_node) # Apply attention weights. messages = sent_attributes * att_weights # Aggregate messages to nodes. nodes = jax.ops.segment_sum(messages, receivers, num_segments=sum_n_node) # Apply an update function to the aggregated messages. 
nodes = node_update_fn(nodes) return graph._replace(nodes=nodes) # pylint: enable=g-long-lambda return _ApplyGAT ###Output _____no_output_____ ###Markdown Test GAT Layer ###Code def attention_logit_fn(sender_attr, receiver_attr, edges): del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gat_layer = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), # Applies W to the node features attention_logit_fn=attention_logit_fn, node_update_fn=None, add_self_edges=True, ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gat_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Train GAT Model on Karate Club DatasetWe will now repeat the karate club experiment with a GAT network. ###Code def gat_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GAT network for the karate club node classification task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ def _attention_logit_fn( sender_attr, receiver_attr, edges): del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=None, add_self_edges=True) graph = gn(graph) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=hk.Linear(2), add_self_edges=True) graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Let's train the model!We expect the model to reach an accuracy of about 0.97. ###Code network = hk.without_apply_rng(hk.transform(gat_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown The final node assignment predicted by the trained model: ###Code result zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GAT') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Classification on MUTAG (Molecules) In the previous section, we used our GCN and GAT networks on a node classification problem. Now, let's use the same model architectures on a **graph classification task**. The main difference from our previous setup is that instead of observing individual node latents, we are now attempting to summarize them into one embedding vector, representative of the entire graph, which we then use to predict the class of this graph.We will do this on one of the most common tasks of this type -- **molecular property prediction**, where molecules are represented as graphs. Nodes correspond to atoms, and edges represent the bonds between them. 
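The key new ingredient on the modelling side is a readout that pools all node latents of one graph into a single vector. The `jraph.GraphNetwork` we build below handles this pooling internally when it aggregates node features for the global update; purely for intuition, a hand-rolled sum readout over a (possibly batched) `GraphsTuple` could look like the following sketch (an illustrative addition, not part of the original colab; `sum_readout` is a made-up name): ###Code
# Illustrative sum readout (sketch, not from the original colab): pool node
# features into one vector per graph of a batched GraphsTuple.
def sum_readout(graph: jraph.GraphsTuple) -> jnp.ndarray:
  num_graphs = graph.n_node.shape[0]
  # graph_ids[i] is the index of the graph that node i belongs to.
  graph_ids = jnp.repeat(jnp.arange(num_graphs), graph.n_node,
                         total_repeat_length=graph.nodes.shape[0])
  # Sum node features per graph; result has shape [num_graphs, feature_dim].
  return jax.ops.segment_sum(graph.nodes, graph_ids, num_segments=num_graphs)

# For the toy graph from earlier this yields [[12.]], i.e. 0 + 2 + 4 + 6.
print(sum_readout(build_toy_graph()))
###Output
_____no_output_____ ###Markdown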
We will use the **MUTAG** dataset for this example, a common dataset from the [TUDatasets](https://chrsmrrs.github.io/datasets/) collection.We will download this graph dataset from pytorch geometric, and convert it to a jraph graph dataset. Please install pytorch and pytorch geometric (just for dataset purposes). ###Code # Install required packages. !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html !pip install torch-geometric import torch_geometric from torch_geometric.datasets import TUDataset mutag_pytorch_dataset = TUDataset(root='./', name='MUTAG') #@title Convert Pytorch graph dataset to jraph def convert_pytorch_graph_to_jraph(pytorch_g: torch_geometric.data.Data) -> Tuple[jraph.GraphsTuple, jnp.ndarray]: """Converts a single pytorch graph Data object to a jraph Graphstuple. Args: pytorch_g: A pytorch-geometric Data object, containing one graph. Returns: Tuple of jraph Graphstuple containing a single graph, and the target. """ node_features, edge_features, senders, receivers, globals, targets = None, None, None, None, None, None if 'x' in pytorch_g: node_features = jnp.array(pytorch_g.x) if 'edge_attr' in pytorch_g: edge_features = jnp.array(pytorch_g.edge_attr) if 'edge_index' in pytorch_g: senders = jnp.array(pytorch_g.edge_index[0]) receivers = jnp.array(pytorch_g.edge_index[1]) if 'y' in pytorch_g: target = jnp.array(pytorch_g.y) n_node = jnp.array([pytorch_g.num_nodes]) n_edge = jnp.array([pytorch_g.num_edges]) jraph_g = jraph.GraphsTuple( nodes=node_features, edges=edge_features, senders=senders, receivers=receivers, n_node=n_node, n_edge=n_edge, globals=globals ) return jraph_g, target def convert_pytorch_dataset_to_jraph(pytorch_dataset): """Converts a pytorch dataset to a jraph graph dataset.""" jraph_dataset = [] for pytorch_g in pytorch_dataset: sample = {} sample['input_graph'], sample['target'] = convert_pytorch_graph_to_jraph(pytorch_g) jraph_dataset.append(sample) return jraph_dataset mutag_ds = convert_pytorch_dataset_to_jraph(mutag_pytorch_dataset) len(mutag_ds) # Inspect the first graph g = mutag_ds[0]['input_graph'] print(f'Number of nodes: {g.n_node[0]}') print(f'Number of edges: {g.n_edge[0]}') print(f'Node features shape: {g.nodes.shape}') print(f'Edge features shape: {g.edges.shape}') draw_jraph_graph_structure(g) print(f"Target: {mutag_ds[0]['target']}") ###Output _____no_output_____ ###Markdown We see that there are 188 graphs, to be classified in one of 2 classes, representing "their mutagenic effect on a specific gram negative bacterium". Node features represent the 1-hot encoding of the atom type (0=C, 1=N, 2=O, 3=F, 4=I, 5=Cl, 6=Br). Edge features (`edge_attr`) represent the bond type, which we will here ignore.Let's split the dataset to use the first 150 graphs as the training set (and the rest as the test set). ###Code train_mutag_ds = mutag_ds[:150] test_mutag_ds = mutag_ds[150:] ###Output _____no_output_____ ###Markdown Padding Graphs to Speed Up TrainingSince jax recompiles the program for each graph size, training would take a long time due to recompilation for different graph sizes. To address that, we pad the number of nodes and edges in the graphs to nearest power of two. Since jax maintains a cacheof compiled programs, the compilation cost is amortized. 
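To see what padding does concretely, here is a small illustrative sketch (not part of the original colab) that pads the toy graph from earlier with `jraph.pad_with_graphs`; the padding nodes and edges end up in an extra dummy graph, and the helper defined below wraps exactly this call: ###Code
# Illustrative example (not from the original colab): pad the 4-node / 5-edge
# toy graph up to 8 nodes and 8 edges in total. The extra nodes and edges are
# placed in a second, dummy graph, so n_node and n_edge gain one entry each.
toy = build_toy_graph()
padded_toy = jraph.pad_with_graphs(toy, n_node=8, n_edge=8, n_graph=2)
print(padded_toy.n_node, padded_toy.n_edge)  # expected: [4 4] [5 3]
###Output
_____no_output_____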
###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def _nearest_bigger_power_of_two(x: int) -> int: """Computes the nearest power of two greater than x for padding.""" y = 2 while y < x: y *= 2 return y def pad_graph_to_nearest_power_of_two( graphs_tuple: jraph.GraphsTuple) -> jraph.GraphsTuple: """Pads a batched `GraphsTuple` to the nearest power of two. For example, if a `GraphsTuple` has 7 nodes, 5 edges and 3 graphs, this method would pad the `GraphsTuple` nodes and edges: 7 nodes --> 8 nodes (2^3) 5 edges --> 8 edges (2^3) And since padding is accomplished using `jraph.pad_with_graphs`, an extra graph and node is added: 8 nodes --> 9 nodes 3 graphs --> 4 graphs Args: graphs_tuple: a batched `GraphsTuple` (can be batch size 1). Returns: A graphs_tuple batched to the nearest power of two. """ # Add 1 since we need at least one padding node for pad_with_graphs. pad_nodes_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_node)) + 1 pad_edges_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_edge)) # Add 1 since we need at least one padding graph for pad_with_graphs. # We do not pad to nearest power of two because the batch size is fixed. pad_graphs_to = graphs_tuple.n_node.shape[0] + 1 return jraph.pad_with_graphs(graphs_tuple, pad_nodes_to, pad_edges_to, pad_graphs_to) ###Output _____no_output_____ ###Markdown Graph Network Model DefinitionWe will use `jraph.GraphNetwork()` to build our graph model. The `GraphNetwork` architecture is defined in [Battaglia et al. (2018)](https://arxiv.org/pdf/1806.01261.pdf).We first define update functions for nodes, edges, and the full graph (global). We will use MLP blocks for all three. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py @jraph.concatenated_args def edge_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Edge update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Node update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def update_global_fn(feats: jnp.ndarray) -> jnp.ndarray: """Global update function for graph net.""" # MUTAG is a binary classification task, so output pos neg logits. net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(2)]) return net(feats) def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: # Add a global paramater for graph classification. graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1])) embedder = jraph.GraphMapFeatures( hk.Linear(128), hk.Linear(128), hk.Linear(128)) net = jraph.GraphNetwork( update_node_fn=node_update_fn, update_edge_fn=edge_update_fn, update_global_fn=update_global_fn) return net(embedder(graph)) ###Output _____no_output_____ ###Markdown Loss and Accuracy FunctionDefine the classification cross-entropy loss and accuracy function. ###Code def compute_loss(params: jnp.ndarray, graph: jraph.GraphsTuple, label: jnp.ndarray, net: jraph.GraphsTuple) -> jnp.ndarray: """Computes loss and accuracy.""" pred_graph = net.apply(params, graph) preds = jax.nn.log_softmax(pred_graph.globals) targets = jax.nn.one_hot(label, 2) # Since we have an extra 'dummy' graph in our batch due to padding, we want # to mask out any loss associated with the dummy graph. 
# Since we padded with `pad_with_graphs` we can recover the mask by using # get_graph_padding_mask. mask = jraph.get_graph_padding_mask(pred_graph) # Cross entropy loss. loss = -jnp.mean(preds * targets * mask[:, None]) # Accuracy taking into account the mask. accuracy = jnp.sum( (jnp.argmax(pred_graph.globals, axis=1) == label) * mask)/jnp.sum(mask) return loss, accuracy ###Output _____no_output_____ ###Markdown Training and Evaluation Functions ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def train(dataset, num_train_steps: int) -> jnp.ndarray: """Training loop.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] # Initialize the network. params = net.init(jax.random.PRNGKey(42), graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. More information can be # found in the jax documentation. compute_loss_fn = jax.jit(jax.value_and_grad( compute_loss_fn, has_aux=True)) for idx in range(num_train_steps): graph = dataset[idx % len(dataset)]['input_graph'] label = dataset[idx % len(dataset)]['target'] # Jax will re-jit your graphnet every time a new graph shape is encountered. # In the limit, this means a new compilation every training step, which # will result in *extremely* slow training. To prevent this, pad each # batch of graphs to the nearest power of two. Since jax maintains a cache # of compiled programs, the compilation cost is amortized. graph = pad_graph_to_nearest_power_of_two(graph) # Since padding is implemented with pad_with_graphs, an extra graph has # been added to the batch, which means there should be an extra label. label = jnp.concatenate([label, jnp.array([0])]) (loss, acc), grad = compute_loss_fn(params, graph, label) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if idx % 50 == 0: print(f'step: {idx}, loss: {loss}, acc: {acc}') print('Training finished') return params def evaluate(dataset, params): """Evaluation Script.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] accumulated_loss = 0 accumulated_accuracy = 0 compute_loss_fn = jax.jit(functools.partial(compute_loss, net=net)) for idx in range(len(dataset)): graph = dataset[idx]['input_graph'] label = dataset[idx]['target'] graph = pad_graph_to_nearest_power_of_two(graph) label = jnp.concatenate([label, jnp.array([0])]) loss, acc = compute_loss_fn(params, graph, label) accumulated_accuracy += acc accumulated_loss += loss if idx % 100 == 0: print(f'Evaluated {idx + 1} graphs') print('Completed evaluation.') loss = accumulated_loss / idx accuracy = accumulated_accuracy / idx print(f'Eval loss: {loss}, accuracy {accuracy}') return loss, accuracy params = train(train_mutag_ds, num_train_steps=500) evaluate(test_mutag_ds, params) ###Output _____no_output_____ ###Markdown We converge at ~76% test accuracy. We could of course further tune the parameters to improve this result. 
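As a small usage sketch (an illustrative addition, not part of the original colab; `sample` and `predicted_class` are just local names for this example), the trained parameters can be applied to a single held-out molecule like this: ###Code
# Classify one test molecule with the trained parameters (illustrative sketch).
net = hk.without_apply_rng(hk.transform(net_fn))
sample = test_mutag_ds[0]
pred_graph = net.apply(params, sample['input_graph'])
# pred_graph.globals holds one pair of class logits per graph in the batch.
predicted_class = int(jnp.argmax(pred_graph.globals[0]))
print(f"Predicted class: {predicted_class}, true class: {int(sample['target'][0])}")
###Output
_____no_output_____ ###Markdown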
Link prediction on CORA (Citation Network) The final problem type we will explore is **link prediction**, an instance of an **edge-level** task. Given a graph, our goal is to predict whether a certain edge $(u,v)$ should be present or not. This is often useful in the recommender system settings (e.g., propose new friends in a social network, propose a movie to a user).As before, the first step is to obtain node latents $h_i$ using a GNN. In this context we will use the autoencoder language and call this GNN **encoder**. Then, we learn a binary classifier $f: (h_i, h_j) \to z_{i,j}$ (**decoder**), predicting if an edge $(i,j)$ should exist or not. While we could use a more elaborate decoder (e.g., an MLP), a common approach we will also use here is to focus on obtaining good node embeddings, and for the decoder simply use the similarity between node latents, i.e. $z_{i,j} = h_i^T h_j$. For this problem we will use the [**Cora** dataset](https://relational.fit.cvut.cz/dataset/CORA), a citation graph containing 2708 scientific publications. For each publication we have a 1433-dimensional feature vector, which is a bag-of-words representation (with a small, fixed dictionary) of the paper text. The edges in this graph represent citations, and are commonly treated as undirected. Each paper is in one of seven topics (classes) so you can also use this dataset for node classification. ###Code cora_pytorch_ds = torch_geometric.datasets.Planetoid(root='/', name='Cora') cora_ds = convert_pytorch_dataset_to_jraph(cora_pytorch_ds) ###Output _____no_output_____ ###Markdown Splitting Edges and Adding "Negative" EdgesFor the link prediction task, we split the edges into train, val and test sets and also add "negative" examples (edges that do not correspond to a citation). We will ignore the topic classes.For the validation and test splits, we add the same number of existing edges ("positive examples") and non-existing edges ("negative examples").In contrast to the validation and test splits, the training split only contains positive examples (set $T_+$). The $|T_+|$ negative examples to be used during training will be sampled ad hoc in each epoch and uniformly at random from all edges that are not in $T_+$. This allows the model to see a wider range of negative examples. ###Code def train_val_test_split_edges(graph: jraph.GraphsTuple, val_perc: float = 0.05, test_perc: float = 0.1): """Split edges in input graph into train, val and test splits. For val and test sets, also include negative edges. Based on torch_geometric.utils.train_test_split_edges. 
""" mask = graph.senders < graph.receivers senders = graph.senders[mask] receivers = graph.receivers[mask] num_val = int(val_perc * senders.shape[0]) num_test = int(test_perc * senders.shape[0]) permuted_indices = onp.random.permutation(range(senders.shape[0])) senders = senders[permuted_indices] receivers = receivers[permuted_indices] if graph.edges is not None: edges = graph.edges[permuted_indices] val_senders = senders[:num_val] val_receivers = receivers[:num_val] if graph.edges is not None: val_edges = edges[:num_val] test_senders = senders[num_val: num_val+num_test] test_receivers = receivers[num_val: num_val+num_test] if graph.edges is not None: test_edges = edges[num_val: num_val+num_test] train_senders = senders[num_val+num_test:] train_receivers = receivers[num_val+num_test:] train_edges = None if graph.edges is not None: train_edges = edges[num_val+num_test:] # make training edges undirected by adding reverse edges back in train_senders_undir = jnp.concatenate((train_senders, train_receivers)) train_receivers_undir = jnp.concatenate((train_receivers, train_senders)) train_senders = train_senders_undir train_receivers = train_receivers_undir # Negative edges. num_nodes = graph.n_node[0] # Create a negative adjacency mask, s.t. mask[i, j] = True iff edge i->j does # not exist in the original graph. neg_adj_mask = onp.ones((num_nodes, num_nodes), dtype=onp.uint8) # upper triangular part neg_adj_mask = onp.triu(neg_adj_mask, k=1) neg_adj_mask[graph.senders, graph.receivers] = 0 neg_adj_mask = neg_adj_mask.astype(onp.bool) neg_senders, neg_receivers = neg_adj_mask.nonzero() perm = onp.random.permutation(range(len(neg_senders))) neg_senders = neg_senders[perm] neg_receivers = neg_receivers[perm] val_neg_senders = neg_senders[:num_val] val_neg_receivers = neg_receivers[:num_val] test_neg_senders = neg_senders[num_val: num_val + num_test] test_neg_receivers = neg_receivers[num_val: num_val + num_test] train_graph = jraph.GraphsTuple( nodes=graph.nodes, edges=train_edges, senders=train_senders, receivers=train_receivers, n_node=graph.n_node, n_edge=jnp.array([len(train_senders)]), globals=graph.globals ) return train_graph, neg_adj_mask, val_senders, val_receivers, val_neg_senders, val_neg_receivers, test_senders, test_receivers, test_neg_senders, test_neg_receivers ###Output _____no_output_____ ###Markdown Test the Edge Splitting Function ###Code graph = cora_ds[0]['input_graph'] train_graph, neg_adj_mask, val_pos_senders, val_pos_receivers, val_neg_senders, val_neg_receivers, test_pos_senders, test_pos_receivers, test_neg_senders, test_neg_receivers = train_val_test_split_edges(graph) print(f'Train set: {train_graph.senders.shape[0]} positive edges, we will sample the same number of negative edges at runtime') print(f'Val set: {val_pos_senders.shape[0]} positive edges, {val_neg_senders.shape[0]} negative edges') print(f'Test set: {test_pos_senders.shape[0]} positive edges, {test_neg_senders.shape[0]} negative edges') print(f'Negative adjacency mask shape: {neg_adj_mask.shape}') print(f'Numbe of negative edges to sample from: {neg_adj_mask.sum()}') ###Output _____no_output_____ ###Markdown *Note*: It will often happen during training that as a negative example, we sample an initially existing edge (that is now e.g. a positive example in the test set). 
We are however not allowed to check for this, as we should be unaware of the existence of test edges during training.Assuming our dot product decoder, we are essentially attempting to bring the latents of endpoints of edges from $T_+$ closer together, and make the latents of all other pairs of nodes as distant as possible. As this is impossible to fully satisfy, the hope is that the model will "fail" to distance those pairs of nodes where the edges should actually exist (positive examples from the test set). Graph Network Model DefinitionWe will use jraph.GraphNetwork to build our graph net model.We first define update functions for node features. We are not using edge or global features for this task. ###Code @jraph.concatenated_args def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Node update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(64)]) return net(feats) def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Network definition.""" graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1])) net = jraph.GraphNetwork( update_node_fn=node_update_fn, update_edge_fn=None, update_global_fn=None) return net(graph) def decode(pred_graph: jraph.GraphsTuple, senders, receivers) -> jnp.DeviceArray: """Given a set of candidate edges, take dot product of respective nodes. Args: pred_graph: input graph. senders: Senders of candidate edges. receivers: Receivers of candidate edges. Returns: For each edge, computes dot product of the features of the two nodes. """ return jnp.squeeze(jnp.sum(pred_graph.nodes[senders] * pred_graph.nodes[receivers], axis=1)) ###Output _____no_output_____ ###Markdown To evaluate our model, we first apply the sigmoid function to obtained dot products to get a score $s_{i,j} \in [0,1]$ for each edge. Now, we can pick a threshold $\tau$ and say that we predict all pairs $(i,j)$ s.t. $s_{i,j} \geq \tau$ as edges (and all the rest as non-edges). Loss and ROC-AUC-Metric FunctionDefine the binary classification cross-entropy loss.To aggregate the results over all choices of $\tau$, we will use ROC-AUC (the area under the ROC curve) as our evaluation metric. ###Code from sklearn.metrics import roc_auc_score def compute_bce_with_logits_loss(x: jnp.DeviceArray, y: jnp.DeviceArray) -> jnp.DeviceArray: """Computes binary cross-entropy with logits loss. Combines sigmoid and BCE, and uses log-sum-exp trick for numerical stability. See https://stackoverflow.com/a/66909858 if you want to learn more. Args: x: Predictions (logits). y: Labels. Returns: Binary cross-entropy loss with mean aggregation. """ max_val = jnp.clip(x, 0, None) loss = x - x * y + max_val + jnp.log(jnp.exp(-max_val) + jnp.exp((-x - max_val))) return loss.mean() def compute_loss(params, graph, senders, receivers, labels, net): """Computes loss.""" pred_graph = net.apply(params, graph) preds = decode(pred_graph, senders, receivers) loss = compute_bce_with_logits_loss(preds, labels) return loss, preds def compute_roc_auc_score(preds: jnp.DeviceArray, labels: jnp.DeviceArray) -> jnp.DeviceArray: """Computes roc auc (area under the curve) score for classification.""" s = jax.nn.sigmoid(preds) roc_auc = roc_auc_score(labels, s) return roc_auc ###Output _____no_output_____ ###Markdown Helper function for sampling negative edges during training. ###Code def negative_sampling( graph: jraph.GraphsTuple, num_neg_samples: int, key: jnp.DeviceArray) -> Tuple[jnp.DeviceArray, jnp.DeviceArray]: """Samples negative edges, i.e. 
edges that don't exist in the input graph.""" num_nodes = graph.n_node[0] total_possible_edges = num_nodes**2 # convert 2D edge indices to 1D representation. pos_idx = graph.senders * num_nodes + graph.receivers # Percentage to oversample edges, so most likely will sample enough neg edges. alpha = jnp.abs(1 / (1 - 1.1 * (graph.senders.shape[0] / total_possible_edges))) perm = jax.random.randint( key, shape=(int(alpha * num_neg_samples),), minval=0, maxval=total_possible_edges, dtype=jnp.uint32) # mask where sampled edges are positive edges. mask = jnp.isin(perm, pos_idx) # remove positive edges. perm = perm[~mask][:num_neg_samples] # convert 1d back to 2d edge indices. neg_senders = perm // num_nodes neg_receivers = perm % num_nodes return neg_senders, neg_receivers ###Output _____no_output_____ ###Markdown Let's write the training loop: ###Code def train(dataset, num_epochs: int): """Training loop.""" key = jax.random.PRNGKey(42) # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] train_graph, _, val_pos_s, val_pos_r, val_neg_s, val_neg_r, test_pos_s, \ test_pos_r, test_neg_s, test_neg_r = train_val_test_split_edges( graph) # Prepare the validation and test data. val_senders = jnp.concatenate((val_pos_s, val_neg_s)) val_receivers = jnp.concatenate((val_pos_r, val_neg_r)) val_labels = jnp.concatenate( (jnp.ones(len(val_pos_s)), jnp.zeros(len(val_neg_s)))) test_senders = jnp.concatenate((test_pos_s, test_neg_s)) test_receivers = jnp.concatenate((test_pos_r, test_neg_r)) test_labels = jnp.concatenate( (jnp.ones(len(test_pos_s)), jnp.zeros(len(test_neg_s)))) # Initialize the network. params = net.init(key, train_graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. More information can be # found in the jax documentation. 
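  # Note: value_and_grad with has_aux=True returns ((loss, preds), grads); the
  # scalar loss is differentiated, while the raw predictions are passed through
  # unchanged so they can be reused for the ROC-AUC metric below.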
compute_loss_fn = jax.jit(jax.value_and_grad(compute_loss_fn, has_aux=True)) for epoch in range(num_epochs): num_neg_samples = train_graph.senders.shape[0] train_neg_senders, train_neg_receivers = negative_sampling( train_graph, num_neg_samples=num_neg_samples, key=key) train_senders = jnp.concatenate((train_graph.senders, train_neg_senders)) train_receivers = jnp.concatenate( (train_graph.receivers, train_neg_receivers)) train_labels = jnp.concatenate( (jnp.ones(len(train_graph.senders)), jnp.zeros(len(train_neg_senders)))) (train_loss, train_preds), grad = compute_loss_fn(params, train_graph, train_senders, train_receivers, train_labels) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if epoch % 10 == 0 or epoch == (num_epochs - 1): train_roc_auc = compute_roc_auc_score(train_preds, train_labels) val_loss, val_preds = compute_loss(params, train_graph, val_senders, val_receivers, val_labels, net) val_roc_auc = compute_roc_auc_score(val_preds, val_labels) print( f'epoch: {epoch}, train_loss: {train_loss:.3f}, ' f'train_roc_auc: {train_roc_auc:.3f}, val_loss: {val_loss:.3f}, ' f'val_roc_auc: {val_roc_auc:.3f}' ) test_loss, test_preds = compute_loss(params, train_graph, test_senders, test_receivers, test_labels, net) test_roc_auc = compute_roc_auc_score(test_preds, test_labels) print('Training finished') print( f'epoch: {epoch}, test_loss: {test_loss:.3f}, test_roc_auc: {test_roc_auc:.3f}' ) return params ###Output _____no_output_____ ###Markdown Let's train the model! We expect the model to reach roughly test_roc_auc of 0.84.(Note that ROC-AUC is a scalar between 0 and 1, with 1 being the ROC-AUC of a perfect classifier.) ###Code params = train(cora_ds, num_epochs=200) ###Output _____no_output_____ ###Markdown Introduction to Graph Neural Nets with JAX/jraph*Lisa Wang, DeepMind ([email protected]), Nikola Jovanović, ETH Zurich ([email protected])***Colab Runtime:**If possible, please use a GPU hardware accelerator to run this colab. You can choose that under *Runtime > Change Runtime Type*.**Prerequisites:*** Some familiarity with [JAX](https://github.com/google/jax), you can refer to this [colab](https://colab.sandbox.google.com/github/google/jax/blob/master/docs/jax-101/01-jax-basics.ipynb) for an introduction to JAX.* Neural network basics* Graph theory basics (MIT Open Courseware [slides](https://ocw.mit.edu/courses/civil-and-environmental-engineering/1-022-introduction-to-network-models-fall-2018/lecture-notes/MIT1_022F18_lec2.pdf) by Amir Ajorlou)We recommend watching the [Theoretical Foundations of Graph Neural Networks Lecture](https://www.youtube.com/watch?v=uF53xsT7mjc&) by Petar Veličković before working through this colab. 
The talk provides a theoretical introduction to Graph Neural Networks (GNNs), historical context and motivating examples.**Outline:*** [Fundamental Graph Concepts](scrollTo=gsKA-syx_LUi)* [Graph Prediction Tasks](scrollTo=spQGRxhPN8Eo)* [Intro to the jraph Library](scrollTo=3C5YI9M0vwvb)* [Graph Convolutional Network (GCN) Layer](scrollTo=NZRMF2d-h2pd)* [Build GCN Model with Multiple Layers](scrollTo=lha8rbQ78l3S)* [Node Classification with GCN on Karate Club Dataset](scrollTo=Z5t7kw7SE_h4)* [Graph Attention (GAT) Layer](scrollTo=yg8g96NdBCK6)* [Train GAT Model on Karate Club Dataset](scrollTo=anfVGJwBe27v)* [Graph Classification on MUTAG (Molecules)](scrollTo=n5TxaTGzBkBa)* [Link Prediction on CORA (Citation Network)](scrollTo=OwVE88dTRC6V)* [Bonus: Intro to Graph Adversarial Attacks](scrollTo=35kbP8GZRFEm)**Additional Resources:*** Battaglia et al. (2018): [Relational inductive biases, deep learning, and graph networks](https://arxiv.org/pdf/1806.01261)---Some sections in this colab build on the [GraphNets Tutorial colab in pytorch](https://github.com/eemlcommunity/PracticalSessions2021/blob/main/graphnets/graphnets_tutorial.ipynb) by Nikola Jovanović.We would like to thank Razvan Pascanu and Petar Veličković for their valuable input and feedback.---*Copyright 2022 by the Authors.**Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0**Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.* Setup: Install and Import libraries ###Code !pip install git+https://github.com/deepmind/jraph.git !pip install flax !pip install dm-haiku # Imports %matplotlib inline import functools import matplotlib.pyplot as plt import jax import jax.numpy as jnp import jax.tree_util as tree import jraph import flax import haiku as hk import optax import pickle import numpy as onp import networkx as nx from typing import Any, Callable, Dict, List, Optional, Tuple ###Output _____no_output_____ ###Markdown Fundamental Graph ConceptsA graph consists of a set of nodes and a set of edges, where edges form connections between nodes.More formally, a graph is defined as $ \mathcal{G} = (\mathcal{V}, \mathcal{E})$ where $\mathcal{V}$ is the set of vertices / nodes, and $\mathcal{E}$ is the set of edges.In an **undirected** graph, each edge is an unordered pair of two nodes $ \in \mathcal{V}$. E.g. a friend network can be represented as an undirected graph, assuming that the relationship "*A is friends with B*" implies "*B is friends with A*".In a **directed** graph, each edge is an ordered pair of nodes $ \in \mathcal{V}$. E.g. a citation network would be best represented with a directed graph, since the relationship "*A cites B*" does not imply "*B cites A*".The **degree** of a node is defined as the number of edges incident on it, i.e. the sum of incoming and outgoing edges for that node.The **in-degree** is the sum of incoming edges only, and the **out-degree** is the sum of outgoing edges only.There are several ways to represent $\mathcal{E}$:1. As a **list of edges**: a list of pairs $(u,v)$, where $(u,v)$ means that there is an edge going from node $u$ to node $v$.2. 
As an **adjacency matrix**: a binary square matrix $A$ of size $|\mathcal{V}| \times |\mathcal{V}|$, where $A_{u,v}=1$ iff there is a connection between nodes $u$ and $v$.3. As an **adjacency list**: An array of $|\mathcal{V}|$ unordered lists, where the $i$th list corresponds to the $i$th node, and contains all the nodes directly connected to node $i$. Example: Below is a directed graph with four nodes and five edges.The arrows on the edges indicate the direction of each edge, e.g. there is an edge going from node 0 to node 1. Between node 0 and node 3, there are two edges: one going from node 0 to node 3 and one from node 3 to node 0.Node 0 has out-degree of 2, since it has two outgoing edges, and an in-degree of 2, since it has two incoming edges.The list of edges is:$$[(0, 1), (0, 3), (1, 2), (2, 0), (3, 0)]$$As adjacency matrix:$$\begin{array}{l|llll} source \setminus dest & n_0 & n_1 & n_2 & n_3 \\ \hlinen_0 & 0 & 1 & 0 & 1 \\n_1 & 0 & 0 & 1 & 0 \\n_2 & 1 & 0 & 0 & 0 \\n_3 & 1 & 0 & 0 & 0\end{array}$$As adjacency list:$$[\{1, 3\}, \{2\}, \{0\}, \{0\}]$$ Graph Prediction TasksWhat are the kinds of problems we want to solve on graphs?The tasks fall into roughly three categories:1. **Node Classification**: E.g. what is the topic of a paper given a citation network of papers?2. **Link Prediction / Edge Classification**: E.g. are two people in a social network friends?3. **Graph Classification**: E.g. is this protein molecule (represented as a graph) likely going to be effective?*The three main graph learning tasks. Image source: Petar Veličković.*Which examples of graph prediction tasks come to your mind? Which task types do they correspond to?We will create and train models on all three task types in this tutorial. Intro to the jraph LibraryIn the following sections, we will learn how to represent graphs and build GNNs in Python. We will use[jraph](https://github.com/deepmind/jraph), a lightweight library for working with GNNs in [JAX](https://github.com/google/jax). Representing a graph in jraphIn jraph, a graph is represented with a `GraphsTuple` object. In addition to defining the graph structure of nodes and edges, you can also store node features, edge features and global graph features in a `GraphsTuple`.In the `GraphsTuple`, edges are represented in two aligned arrays of node indices: senders (source nodes) and receivers (destinaton nodes).Each index corresponds to one edge, e.g. edge `i` goes from `senders[i]` to `receivers[i]`.You can even store multiple graphs in one `GraphsTuple` object.We will start with creating a simple directed graph with 4 nodes and 5 edges. We will also add toy features to the nodes, using `2*node_index` as the feature.We will later use this toy graph in the GCN demo. ###Code def build_toy_graph() -> jraph.GraphsTuple: """Define a four node graph, each node has a scalar as its feature.""" # Nodes are defined implicitly by their features. # We will add four nodes, each with a feature, e.g. # node 0 has feature [0.], # node 1 has featre [2.] etc. # len(node_features) is the number of nodes. node_features = jnp.array([[0.], [2.], [4.], [6.]]) # We will now specify 5 directed edges connecting the nodes we defined above. # We define this with `senders` (source node indices) and `receivers` # (destination node indices). # For example, to add an edge from node 0 to node 1, we append 0 to senders, # and 1 to receivers. 
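  # In other words, (senders[i], receivers[i]) together describe the i-th
  # directed edge of the graph.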
# We can do the same for all 5 edges: # 0 -> 1 # 1 -> 2 # 2 -> 0 # 3 -> 0 # 0 -> 3 senders = jnp.array([0, 1, 2, 3, 0]) receivers = jnp.array([1, 2, 0, 0, 3]) # You can optionally add edge attributes to the 5 edges. edges = jnp.array([[5.], [6.], [7.], [8.], [8.]]) # We then save the number of nodes and the number of edges. # This information is used to make running GNNs over multiple graphs # in a GraphsTuple possible. n_node = jnp.array([4]) n_edge = jnp.array([5]) # Optionally you can add `global` information, such as a graph label. global_context = jnp.array([[1]]) # Same feature dims as nodes and edges. graph = jraph.GraphsTuple( nodes=node_features, edges=edges, senders=senders, receivers=receivers, n_node=n_node, n_edge=n_edge, globals=global_context ) return graph graph = build_toy_graph() ###Output _____no_output_____ ###Markdown Inspecting the GraphsTuple ###Code # Number of nodes # Note that `n_node` returns an array. The length of `n_node` corresponds to # the number of graphs stored in one `GraphsTuple`. # In this case, we only have one graph, so n_node has length 1. graph.n_node # Number of edges graph.n_edge # Node features graph.nodes # Edge features graph.edges # Edges graph.senders graph.receivers # Graph-level features graph.globals ###Output _____no_output_____ ###Markdown Visualizing the GraphTo visualize the graph structure of the graph we created above, we will use the [`networkx`](networkx.org) library because it already has functions for drawing graphs.We first convert the `jraph.GraphsTuple` to a `networkx.DiGraph`. ###Code def convert_jraph_to_networkx_graph(jraph_graph: jraph.GraphsTuple) -> nx.Graph: nodes, edges, receivers, senders, _, _, _ = jraph_graph nx_graph = nx.DiGraph() if nodes is None: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n) else: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n, node_feature=nodes[n]) if edges is None: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge(int(senders[e]), int(receivers[e])) else: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge( int(senders[e]), int(receivers[e]), edge_feature=edges[e]) return nx_graph def draw_jraph_graph_structure(jraph_graph: jraph.GraphsTuple) -> None: nx_graph = convert_jraph_to_networkx_graph(jraph_graph) pos = nx.spring_layout(nx_graph) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, font_color='yellow') draw_jraph_graph_structure(graph) ###Output _____no_output_____ ###Markdown Graph Convolutional Network (GCN) LayerNow let's implement our first graph network!The graph convolutional network, introduced by by Kipf et al. (2017) in https://arxiv.org/abs/1609.02907, is one of the basic graph network architectures. We will build its core building block, the graph convolutional layer.In a convolutional neural network (CNN), a convolutional filter (e.g. 3x3) is applied repeatedly to different parts of a larger input (e.g. 64x64) by striding across the input.In a GCN, a convolution filter is applied to the neighbourhoods around a node in a graph.However, there are also some differences to point out:In contrast to the CNN filter, the neighbourhoods in a GCN can be of different sizes, and there is no ordering of inputs. To see that, note that the CNN filter performs a weighted sum aggregation over the inputs with learnable weights, where each filter input has its own weight. In the GCN, the same weight is applied to all neighbours and the aggregation function is not learned. In other words, in a GCN, each neighbor contributes equally. 
This is why the CNN filter is not order-invariant, but the GCN filter is.

Comparison of CNN and GCN filters. Image source: https://arxiv.org/pdf/1901.00596.pdf

More specifically, the GCN layer performs two steps:

1. _Compute messages / update node features_: Create a feature vector $\vec{h}_n$ for each node $n$ (e.g. with an MLP). This is going to be the message that this node will pass to neighboring nodes.

2. _Message-passing / aggregate node features_: For each node, calculate a new feature vector $\vec{h}'_n$ based on the messages (features) from the nodes in its neighborhood. In a directed graph, only nodes from incoming edges are counted as neighbors. The image below shows this aggregation step. There are multiple options for aggregation in a GCN, e.g. taking the mean, the sum, the min or max. (Later in this tutorial, we will also see how we can make the aggregation function dependent on the node features by adding an attention mechanism in the Graph Attention Network.)

*"A generic overview of a graph convolution operation, highlighting the relevant information for deriving the next-level features for every node in the graph."* Image source: Petar Veličković (https://github.com/PetarV-/TikZ)

Simple GCN Layer ###Code
def apply_simplified_gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
  # Unpack GraphsTuple
  nodes, _, receivers, senders, _, _, _ = graph

  # 1. Update node features
  # For simplicity, we will first use an identity function here, and replace it
  # with a trainable MLP block later.
  update_node_fn = lambda nodes: nodes
  nodes = update_node_fn(nodes)

  # 2. Aggregate node features over nodes in neighborhood
  # Equivalent to jnp.sum(n_node), but jittable
  total_num_nodes = tree.tree_leaves(nodes)[0].shape[0]
  aggregate_nodes_fn = jax.ops.segment_sum

  # Compute new node features by aggregating messages from neighboring nodes
  nodes = tree.tree_map(
      lambda x: aggregate_nodes_fn(x[senders], receivers, total_num_nodes),
      nodes)

  out_graph = graph._replace(nodes=nodes)
  return out_graph
###Output _____no_output_____
###Markdown We can now run the graph convolution on our toy graph from before. ###Code graph = build_toy_graph() ###Output _____no_output_____ ###Markdown Here is the visualized graph. ###Code
draw_jraph_graph_structure(graph)
out_graph = apply_simplified_gcn(graph)
###Output _____no_output_____
###Markdown Since we used the identity function for updating nodes and sum aggregation, we can verify the results pretty easily. As a reminder, in this toy graph, the feature of each node is twice its node index (node 2 has feature 4, node 3 has feature 6, and so on).

Node 0: sum of features from node 2 and node 3 $\rightarrow$ 10.

Node 1: sum of features from node 0 $\rightarrow$ 0.

Node 2: sum of features from node 1 $\rightarrow$ 2.

Node 3: sum of features from node 0 $\rightarrow$ 0. ###Code out_graph.nodes ###Output _____no_output_____ ###Markdown Add Trainable Parameters to GCN layer

So far our graph convolution operation doesn't have any learnable parameters. Let's add an MLP block to the update function to make it trainable.
###Code class MLP(hk.Module): def __init__(self, features: jnp.ndarray): super().__init__() self.features = features def __call__(self, x: jnp.ndarray) -> jnp.ndarray: layers = [] for feat in self.features[:-1]: layers.append(hk.Linear(feat)) layers.append(jax.nn.relu) layers.append(hk.Linear(self.features[-1])) mlp = hk.Sequential(layers) return mlp(x) # Use MLP block to define the update node function update_node_fn = lambda x: MLP(features=[8, 4])(x) ###Output _____no_output_____ ###Markdown Check outputs of `update_node_fn` with MLP Block ###Code graph = build_toy_graph() update_node_module = hk.without_apply_rng(hk.transform(update_node_fn)) params = update_node_module.init(jax.random.PRNGKey(42), graph.nodes) out = update_node_module.apply(params, graph.nodes) ###Output _____no_output_____ ###Markdown As output, we expect the updated node features. We should see one array of dim 4 for each of the 4 nodes, which is the result of applying a single MLP block to the features of each node individually. ###Code out ###Output _____no_output_____ ###Markdown Add Self-Edges (Edges connecting a node to itself)For each node, add an edge of the node onto itself. This way, nodes will include themselves in the aggregation step. ###Code def add_self_edges_fn(receivers: jnp.ndarray, senders: jnp.ndarray, total_num_nodes: int) -> Tuple[jnp.ndarray, jnp.ndarray]: """Adds self edges. Assumes self edges are not in the graph yet.""" receivers = jnp.concatenate((receivers, jnp.arange(total_num_nodes)), axis=0) senders = jnp.concatenate((senders, jnp.arange(total_num_nodes)), axis=0) return receivers, senders ###Output _____no_output_____ ###Markdown Add Symmetric NormalizationNote that the nodes may have different numbers of neighbors / degrees.This could lead to instabilities during neural network training, e.g. exploding or vanishing gradients. To address that, normalization is a commonly used method. In this case, we will normalize by node degrees.As a first attempt, we could count the number of incoming edges (including self-edge) and divide by that value.More formally, let $A$ be the adjacency matrix defining the edges of the graph.Then we define the degree matrix $D$ as a diagonal matrix with $D_{ii} = \sum_jA_{ij}$ (the degree of node $i$)Now we can normalize $AH$ by dividing it by the node degrees:$${D}^{-1}AH$$To take both the in and out degrees into account, we can use symmetric normalization, which is also what Kipf and Welling proposed in their [paper](https://arxiv.org/abs/1609.02907):$$D^{-\frac{1}{2}}AD^{-\frac{1}{2}}H$$ General GCN LayerNow we can write a more general and configurable version of the Graph Convolution layer, allowing the caller to specify:* **`update_node_fn`**: Function to use to update node features (e.g. the MLP block version we just implemented)* **`aggregate_nodes_fn`**: Aggregation function to use to aggregate messages from neighbourhood.* **`add_self_edges`**: Whether to add self edges for aggregation step.* **`symmetric_normalization`**: Whether to add symmetric normalization. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L506 def GraphConvolution(update_node_fn: Callable, aggregate_nodes_fn: Callable = jax.ops.segment_sum, add_self_edges: bool = False, symmetric_normalization: bool = True) -> Callable: """Returns a method that applies a Graph Convolution layer. Graph Convolutional layer as in https://arxiv.org/abs/1609.02907, NOTE: This implementation does not add an activation after aggregation. 
If you are stacking layers, you may want to add an activation between each layer. Args: update_node_fn: function used to update the nodes. In the paper a single layer MLP is used. aggregate_nodes_fn: function used to aggregates the sender nodes. add_self_edges: whether to add self edges to nodes in the graph as in the paper definition of GCN. Defaults to False. symmetric_normalization: whether to use symmetric normalization. Defaults to True. Returns: A method that applies a Graph Convolution layer. """ def _ApplyGCN(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Applies a Graph Convolution layer.""" nodes, _, receivers, senders, _, _, _ = graph # First pass nodes through the node updater. nodes = update_node_fn(nodes) # Equivalent to jnp.sum(n_node), but jittable total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. # In principle, a `GraphsTuple` should partition by n_edge, but in # this case it is not required since a GCN is agnostic to whether # the `GraphsTuple` is a batch of graphs or a single large graph. conv_receivers, conv_senders = add_self_edges_fn(receivers, senders, total_num_nodes) else: conv_senders = senders conv_receivers = receivers # pylint: disable=g-long-lambda if symmetric_normalization: # Calculate the normalization values. count_edges = lambda x: jax.ops.segment_sum( jnp.ones_like(conv_senders), x, total_num_nodes) sender_degree = count_edges(conv_senders) receiver_degree = count_edges(conv_receivers) # Pre normalize by sqrt sender degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: x * jax.lax.rsqrt(jnp.maximum(sender_degree, 1.0))[:, None], nodes, ) # Aggregate the pre-normalized nodes. nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # Post normalize by sqrt receiver degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: (x * jax.lax.rsqrt(jnp.maximum(receiver_degree, 1.0))[:, None]), nodes, ) else: nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # pylint: enable=g-long-lambda return graph._replace(nodes=nodes) return _ApplyGCN ###Output _____no_output_____ ###Markdown Test General GCN Layer ###Code gcn_layer = GraphConvolution( update_node_fn=lambda n: MLP(features=[8, 4])(n), aggregate_nodes_fn=jax.ops.segment_sum, add_self_edges=True, symmetric_normalization=True ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Build GCN Model with Multiple LayersWith a single GCN layer, a node's representation after the GCN layer is onlyinfluenced by its direct neighbourhood. However, we may want to consider larger neighbourhoods, i.e. more than just 1 hop away. To achieve that, we can stackmultiple GCN layers, similar to how stacking CNN layers expands the input region.We will define a network with three GCN layers: ###Code def gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a graph neural network with 3 GCN layers. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. 
""" gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(4)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) graph = gn(graph) return graph graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Node Classification with GCN on Karate Club DatasetTime to try out our GCN on our first graph prediction task! Zachary's Karate Club Dataset[Zachary's karate club](https://en.wikipedia.org/wiki/Zachary%27s_karate_club) is a small dataset commonly used as an example for a social graph. It's great for demo purposes, as it's easy to visualize and quick to train a model on it.A node represents a student or instructor in the club. An edge means that those two people have interacted outside of the class. There are two instructors in the club.Each student is assigned to one of two instructors. Optimizing the GCN on the Karate Club Node Classification TaskThe task is to predict the assignment of students to instructors, given the social graph and only knowing the assignment of two nodes (the two instructors) a priori.In other words, out of the 34 nodes, only two nodes are labeled, and we are trying to optimize the assignment of the other 32 nodes, by **maximizing the log-likelihood of the two known node assignments**.We will compute the accuracy of our node assignments by comparing to the ground-truth assignments. **Note that the ground-truth for the 32 student nodes is not used in the loss function itself.** Let's load the dataset: ###Code """Zachary's karate club example. From https://github.com/deepmind/jraph/blob/master/jraph/examples/zacharys_karate_club.py. Here we train a graph neural network to process Zachary's karate club. https://en.wikipedia.org/wiki/Zachary%27s_karate_club Zachary's karate club is used in the literature as an example of a social graph. Here we use a graphnet to optimize the assignments of the students in the karate club to two distinct karate instructors (Mr. Hi and John A). """ def get_zacharys_karate_club() -> jraph.GraphsTuple: """Returns GraphsTuple representing Zachary's karate club.""" social_graph = [ (1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2), (4, 0), (5, 0), (6, 0), (6, 4), (6, 5), (7, 0), (7, 1), (7, 2), (7, 3), (8, 0), (8, 2), (9, 2), (10, 0), (10, 4), (10, 5), (11, 0), (12, 0), (12, 3), (13, 0), (13, 1), (13, 2), (13, 3), (16, 5), (16, 6), (17, 0), (17, 1), (19, 0), (19, 1), (21, 0), (21, 1), (25, 23), (25, 24), (27, 2), (27, 23), (27, 24), (28, 2), (29, 23), (29, 26), (30, 1), (30, 8), (31, 0), (31, 24), (31, 25), (31, 28), (32, 2), (32, 8), (32, 14), (32, 15), (32, 18), (32, 20), (32, 22), (32, 23), (32, 29), (32, 30), (32, 31), (33, 8), (33, 9), (33, 13), (33, 14), (33, 15), (33, 18), (33, 19), (33, 20), (33, 22), (33, 23), (33, 26), (33, 27), (33, 28), (33, 29), (33, 30), (33, 31), (33, 32)] # Add reverse edges. social_graph += [(edge[1], edge[0]) for edge in social_graph] n_club_members = 34 return jraph.GraphsTuple( n_node=jnp.asarray([n_club_members]), n_edge=jnp.asarray([len(social_graph)]), # One-hot encoding for nodes, i.e. argmax(nodes) = node index. nodes=jnp.eye(n_club_members), # No edge features. 
edges=None, globals=None, senders=jnp.asarray([edge[0] for edge in social_graph]), receivers=jnp.asarray([edge[1] for edge in social_graph])) def get_ground_truth_assignments_for_zacharys_karate_club() -> jnp.ndarray: """Returns ground truth assignments for Zachary's karate club.""" return jnp.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) graph = get_zacharys_karate_club() print(f'Number of nodes: {graph.n_node[0]}') print(f'Number of edges: {graph.n_edge[0]}') ###Output _____no_output_____ ###Markdown Visualize the karate club graph with circular node layout: ###Code nx_graph = convert_jraph_to_networkx_graph(graph) pos = nx.circular_layout(nx_graph) plt.figure(figsize=(6, 6)) nx.draw(nx_graph, pos=pos, with_labels = True, node_size=500, font_color='yellow') ###Output _____no_output_____ ###Markdown Define the GCN with the `GraphConvolution` layers we implemented: ###Code def gcn_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GCN for the karate club task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) # output dim is 2 because we have 2 output classes. graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Training and evaluation code: ###Code def optimize_club(network: hk.Transformed, num_steps: int) -> jnp.ndarray: """Solves the karate club problem by optimizing the assignments of students.""" zacharys_karate_club = get_zacharys_karate_club() labels = get_ground_truth_assignments_for_zacharys_karate_club() params = network.init(jax.random.PRNGKey(42), zacharys_karate_club) @jax.jit def predict(params: hk.Params) -> jnp.ndarray: decoded_graph = network.apply(params, zacharys_karate_club) return jnp.argmax(decoded_graph.nodes, axis=1) @jax.jit def prediction_loss(params: hk.Params) -> jnp.ndarray: decoded_graph = network.apply(params, zacharys_karate_club) # We interpret the decoded nodes as a pair of logits for each node. log_prob = jax.nn.log_softmax(decoded_graph.nodes) # The only two assignments we know a-priori are those of Mr. Hi (Node 0) # and John A (Node 33). return -(log_prob[0, 0] + log_prob[33, 1]) opt_init, opt_update = optax.adam(1e-2) opt_state = opt_init(params) @jax.jit def update(params: hk.Params, opt_state) -> Tuple[hk.Params, Any]: """Returns updated params and state.""" g = jax.grad(prediction_loss)(params) updates, opt_state = opt_update(g, opt_state) return optax.apply_updates(params, updates), opt_state @jax.jit def accuracy(params: hk.Params) -> jnp.ndarray: decoded_graph = network.apply(params, zacharys_karate_club) return jnp.mean(jnp.argmax(decoded_graph.nodes, axis=1) == labels) for step in range(num_steps): print(f"step {step} accuracy {accuracy(params).item():.2f}") params, opt_state = update(params, opt_state) return predict(params) ###Output _____no_output_____ ###Markdown Let's train the GCN! We expect this model reach an accuracy of about 0.91. ###Code network = hk.without_apply_rng(hk.transform(gcn_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown Try modifying the model parameters to see if you can improve the accuracy!You can also modify the dataset itself, and see how that influences model training. 
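For example, one possible tweak is sketched below: it simply widens the hidden layer of the GCN defined above and reuses the `optimize_club` helper. The hidden size of 16 is an arbitrary choice for illustration, not a tuned value. ###Code
def wider_gcn_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
  """Same two-layer GCN as above, but with a wider hidden layer."""
  gn = GraphConvolution(
      update_node_fn=lambda n: jax.nn.relu(hk.Linear(16)(n)),
      add_self_edges=True)
  graph = gn(graph)
  gn = GraphConvolution(update_node_fn=hk.Linear(2))
  graph = gn(graph)
  return graph

wider_network = hk.without_apply_rng(hk.transform(wider_gcn_definition))
wider_result = optimize_club(wider_network, num_steps=15)
###Output _____no_output_____
###Markdown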
Node assignments predicted by the model at the end of training: ###Code result ###Output _____no_output_____ ###Markdown Visualize ground truth and predicted node assignments:What do you think of the results? ###Code zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GCN') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Attention (GAT) LayerWhile the GCN we covered in the previous section can learn meaningful representations, it also has some shortcomings. Can you think of any?In the GCN layer, the messages from all its neighbours and the node itself are equally weighted. This may lead to loss of node-specific information. E.g., consider the case when a set of nodes shares the same set of neighbors, and start out with different node features. Then because of averaging, their resulting output features would be the same. Adding self-edges mitigates this issue by a small amount, but this problem is magnified with increasing number of GCN layers and number of edges connecting to a node.The graph attention (GAT) mechanism, as proposed by [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903), allows the network to learn how to weigh / assign importance to the node features from the neighbourhood when computing the new node features. This is very similar to the idea of using attention in Transformers, which were introduced in [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762).(One could even argue that Transformers are graph attention networks operating on the special case of fully-connected graphs.)In the figure below, $\vec{h}$ are the node features and $\vec{\alpha}$ are the learned attention weights.Figure Credit: [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903).(Detail: This image is showing multi-headed attention with 3 heads, each color corresponding to a different head. At the end, an aggregation function is applied over all the heads.)To obtain the output node features of the GAT layer, we compute:$$ \vec{h}'_i = \sum _{j \in \mathcal{N}(i)}\alpha_{ij} \mathbf{W} \vec{h}_j$$Here, $\mathbf{W}$ is a weight matrix which performs a linear transformation on the input. How do we obtain $\alpha$, or in other words, learn what to pay attention to?Intuitively, the attention coefficient $\alpha_{ij}$ should rely on both the transformed features from nodes $i$ and $j$. 
So let's first define an attention mechanism function $\mathrm{attention\_fn}$ that computes the intermediary attention coefficients $e_{ij}$:$$ e_{ij} = \mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j)$$To obtain normalized attention weights $\alpha$, we apply a softmax:$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum _{j \in \mathcal{N}(i)}\exp(e_{ij})}$$For the function $a$, the authors of the GAT paper chose to concatenate the transformed node features (denoted by $||$) and apply a single-layer feedforward network, parameterized by a weight vector $\vec{\mathbf{a}}$ and with LeakyRelu as non-linearity.In the implementation below, we refer to $\mathbf{W}$ as `attention_query_fn` and $att\_fn$ as `attention_logit_fn`.$$\mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j) = \text{LeakyReLU}(\vec{\mathbf{a}}(\mathbf{W}\vec{h}_i || \mathbf{W}\vec{h}_j))$$The figure below summarizes this attention mechanism visually.Figure Credit: Petar Velickovic.<!-- $\sum_{j \in \mathcal{N}(i)}\vec{\alpha}_{ij} \stackrel{!}{=}1 $ --> ###Code # GAT implementation adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L442. def GAT(attention_query_fn: Callable, attention_logit_fn: Callable, node_update_fn: Optional[Callable] = None, add_self_edges: bool = True) -> Callable: """Returns a method that applies a Graph Attention Network layer. Graph Attention message passing as described in https://arxiv.org/pdf/1710.10903.pdf. This model expects node features as a jnp.array, may use edge features for computing attention weights, and ignore global features. It does not support nests. Args: attention_query_fn: function that generates attention queries from sender node features. attention_logit_fn: function that converts attention queries into logits for softmax attention. node_update_fn: function that updates the aggregated messages. If None, will apply leaky relu and concatenate (if using multi-head attention). Returns: A function that applies a Graph Attention layer. """ # pylint: disable=g-long-lambda if node_update_fn is None: # By default, apply the leaky relu and then concatenate the heads on the # feature axis. node_update_fn = lambda x: jnp.reshape( jax.nn.leaky_relu(x), (x.shape[0], -1)) def _ApplyGAT(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Applies a Graph Attention layer.""" nodes, edges, receivers, senders, _, _, _ = graph # Equivalent to the sum of n_node, but statically known. try: sum_n_node = nodes.shape[0] except IndexError: raise IndexError('GAT requires node features') # Pass nodes through the attention query function to transform # node features, e.g. with an MLP. nodes = attention_query_fn(nodes) total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. receivers, senders = add_self_edges_fn(receivers, senders, total_num_nodes) # We compute the softmax logits using a function that takes the # embedded sender and receiver attributes. sent_attributes = nodes[senders] received_attributes = nodes[receivers] att_softmax_logits = attention_logit_fn(sent_attributes, received_attributes, edges) # Compute the attention softmax weights on the entire tree. att_weights = jraph.segment_softmax( att_softmax_logits, segment_ids=receivers, num_segments=sum_n_node) # Apply attention weights. messages = sent_attributes * att_weights # Aggregate messages to nodes. 
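    # segment_sum groups the weighted messages by receiver index, so every node
    # ends up with the attention-weighted sum over its incoming neighbourhood.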
nodes = jax.ops.segment_sum(messages, receivers, num_segments=sum_n_node) # Apply an update function to the aggregated messages. nodes = node_update_fn(nodes) return graph._replace(nodes=nodes) # pylint: enable=g-long-lambda return _ApplyGAT ###Output _____no_output_____ ###Markdown Test GAT Layer ###Code def attention_logit_fn(sender_attr: jnp.ndarray, receiver_attr: jnp.ndarray, edges: jnp.ndarray) -> jnp.ndarray: del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gat_layer = GAT( attention_query_fn=lambda n: hk.Linear(8) (n), # Applies W to the node features attention_logit_fn=attention_logit_fn, node_update_fn=None, add_self_edges=True, ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gat_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Train GAT Model on Karate Club DatasetWe will now repeat the karate club experiment with a GAT network. ###Code def gat_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GAT network for the karate club node classification task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ def _attention_logit_fn(sender_attr: jnp.ndarray, receiver_attr: jnp.ndarray, edges: jnp.ndarray) -> jnp.ndarray: del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=None, add_self_edges=True) graph = gn(graph) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=hk.Linear(2), add_self_edges=True) graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Let's train the model!We expect the model to reach an accuracy of about 0.97. ###Code network = hk.without_apply_rng(hk.transform(gat_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown The final node assignment predicted by the trained model: ###Code result zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GAT') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw( nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Classification on MUTAG (Molecules) In the previous section, we used our GCN and GAT networks on a node classification problem. Now, let's use the same model architectures on a **graph classification task**. The main difference from our previous setup is that instead of observing individual node latents, we are now attempting to summarize them into one embedding vector, representative of the entire graph, which we then use to predict the class of this graph.We will do this on one of the most common tasks of this type -- **molecular property prediction**, where molecules are represented as graphs. 
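As a rough sketch of that readout idea (a hypothetical `sum_readout` helper shown only for illustration; the model we train below lets `jraph.GraphNetwork` handle this aggregation internally), node latents can be summed per graph to give one embedding vector per graph: ###Code
def sum_readout(graph: jraph.GraphsTuple) -> jnp.ndarray:
  """Sums node features per graph, giving one embedding vector per graph."""
  num_graphs = graph.n_node.shape[0]
  # graph_ids[i] is the index of the graph that node i belongs to.
  graph_ids = jnp.repeat(
      jnp.arange(num_graphs), graph.n_node,
      total_repeat_length=graph.nodes.shape[0])
  return jax.ops.segment_sum(graph.nodes, graph_ids, num_segments=num_graphs)
###Output _____no_output_____
###Markdown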
Nodes correspond to atoms, and edges represent the bonds between them. We will use the **MUTAG** dataset for this example, a common dataset from the [TUDatasets](https://chrsmrrs.github.io/datasets/) collection.We have converted this dataset to be compatible with jraph and will download it in the cell below.Citation for TUDatasets: [Morris, Christopher, et al. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663. 2020.](https://chrsmrrs.github.io/datasets/) ###Code # Download jraph version of MUTAG. !wget -P /tmp/ https://storage.googleapis.com/dm-educational/assets/graph-nets/jraph_datasets/mutag.pickle with open('/tmp/mutag.pickle', 'rb') as f: mutag_ds = pickle.load(f) ###Output _____no_output_____ ###Markdown The dataset is saved as a list of examples, each example is a dictionary containing an input_graph and its corresponding target. ###Code len(mutag_ds) # Inspect the first graph g = mutag_ds[0]['input_graph'] print(f'Number of nodes: {g.n_node[0]}') print(f'Number of edges: {g.n_edge[0]}') print(f'Node features shape: {g.nodes.shape}') print(f'Edge features shape: {g.edges.shape}') draw_jraph_graph_structure(g) # Target for first graph print(f"Target: {mutag_ds[0]['target']}") ###Output _____no_output_____ ###Markdown We see that there are 188 graphs, to be classified in one of 2 classes, representing "their mutagenic effect on a specific gram negative bacterium". Node features represent the 1-hot encoding of the atom type (0=C, 1=N, 2=O, 3=F, 4=I, 5=Cl, 6=Br). Edge features (`edge_attr`) represent the bond type, which we will here ignore.Let's split the dataset to use the first 150 graphs as the training set (and the rest as the test set). ###Code train_mutag_ds = mutag_ds[:150] test_mutag_ds = mutag_ds[150:] ###Output _____no_output_____ ###Markdown Padding Graphs to Speed Up TrainingSince jax recompiles the program for each graph size, training would take a long time due to recompilation for different graph sizes. To address that, we pad the number of nodes and edges in the graphs to nearest power of two. Since jax maintains a cacheof compiled programs, the compilation cost is amortized. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def _nearest_bigger_power_of_two(x: int) -> int: """Computes the nearest power of two greater than x for padding.""" y = 2 while y < x: y *= 2 return y def pad_graph_to_nearest_power_of_two( graphs_tuple: jraph.GraphsTuple) -> jraph.GraphsTuple: """Pads a batched `GraphsTuple` to the nearest power of two. For example, if a `GraphsTuple` has 7 nodes, 5 edges and 3 graphs, this method would pad the `GraphsTuple` nodes and edges: 7 nodes --> 8 nodes (2^3) 5 edges --> 8 edges (2^3) And since padding is accomplished using `jraph.pad_with_graphs`, an extra graph and node is added: 8 nodes --> 9 nodes 3 graphs --> 4 graphs Args: graphs_tuple: a batched `GraphsTuple` (can be batch size 1). Returns: A graphs_tuple batched to the nearest power of two. """ # Add 1 since we need at least one padding node for pad_with_graphs. pad_nodes_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_node)) + 1 pad_edges_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_edge)) # Add 1 since we need at least one padding graph for pad_with_graphs. # We do not pad to nearest power of two because the batch size is fixed. 
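  # In this colab each example GraphsTuple holds a single graph, so this is 2.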
pad_graphs_to = graphs_tuple.n_node.shape[0] + 1 return jraph.pad_with_graphs(graphs_tuple, pad_nodes_to, pad_edges_to, pad_graphs_to) ###Output _____no_output_____ ###Markdown Graph Network Model DefinitionWe will use `jraph.GraphNetwork()` to build our graph model. The `GraphNetwork` architecture is defined in [Battaglia et al. (2018)](https://arxiv.org/pdf/1806.01261.pdf).We first define update functions for nodes, edges, and the full graph (global). We will use MLP blocks for all three. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py @jraph.concatenated_args def edge_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Edge update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Node update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def update_global_fn(feats: jnp.ndarray) -> jnp.ndarray: """Global update function for graph net.""" # MUTAG is a binary classification task, so output pos neg logits. net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(2)]) return net(feats) def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: # Add a global paramater for graph classification. graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1])) embedder = jraph.GraphMapFeatures( hk.Linear(128), hk.Linear(128), hk.Linear(128)) net = jraph.GraphNetwork( update_node_fn=node_update_fn, update_edge_fn=edge_update_fn, update_global_fn=update_global_fn) return net(embedder(graph)) ###Output _____no_output_____ ###Markdown Loss and Accuracy FunctionDefine the classification cross-entropy loss and accuracy function. ###Code def compute_loss(params: hk.Params, graph: jraph.GraphsTuple, label: jnp.ndarray, net: jraph.GraphsTuple) -> Tuple[jnp.ndarray, jnp.ndarray]: """Computes loss and accuracy.""" pred_graph = net.apply(params, graph) preds = jax.nn.log_softmax(pred_graph.globals) targets = jax.nn.one_hot(label, 2) # Since we have an extra 'dummy' graph in our batch due to padding, we want # to mask out any loss associated with the dummy graph. # Since we padded with `pad_with_graphs` we can recover the mask by using # get_graph_padding_mask. mask = jraph.get_graph_padding_mask(pred_graph) # Cross entropy loss. loss = -jnp.mean(preds * targets * mask[:, None]) # Accuracy taking into account the mask. accuracy = jnp.sum( (jnp.argmax(pred_graph.globals, axis=1) == label) * mask) / jnp.sum(mask) return loss, accuracy ###Output _____no_output_____ ###Markdown Training and Evaluation Functions ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def train(dataset: List[Dict[str, Any]], num_train_steps: int) -> hk.Params: """Training loop.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] # Initialize the network. params = net.init(jax.random.PRNGKey(42), graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. 
More information can be # found in the jax documentation. compute_loss_fn = jax.jit(jax.value_and_grad( compute_loss_fn, has_aux=True)) for idx in range(num_train_steps): graph = dataset[idx % len(dataset)]['input_graph'] label = dataset[idx % len(dataset)]['target'] # Jax will re-jit your graphnet every time a new graph shape is encountered. # In the limit, this means a new compilation every training step, which # will result in *extremely* slow training. To prevent this, pad each # batch of graphs to the nearest power of two. Since jax maintains a cache # of compiled programs, the compilation cost is amortized. graph = pad_graph_to_nearest_power_of_two(graph) # Since padding is implemented with pad_with_graphs, an extra graph has # been added to the batch, which means there should be an extra label. label = jnp.concatenate([label, jnp.array([0])]) (loss, acc), grad = compute_loss_fn(params, graph, label) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if idx % 50 == 0: print(f'step: {idx}, loss: {loss}, acc: {acc}') print('Training finished') return params def evaluate(dataset: List[Dict[str, Any]], params: hk.Params) -> Tuple[jnp.ndarray, jnp.ndarray]: """Evaluation Script.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] accumulated_loss = 0 accumulated_accuracy = 0 compute_loss_fn = jax.jit(functools.partial(compute_loss, net=net)) for idx in range(len(dataset)): graph = dataset[idx]['input_graph'] label = dataset[idx]['target'] graph = pad_graph_to_nearest_power_of_two(graph) label = jnp.concatenate([label, jnp.array([0])]) loss, acc = compute_loss_fn(params, graph, label) accumulated_accuracy += acc accumulated_loss += loss if idx % 100 == 0: print(f'Evaluated {idx + 1} graphs') print('Completed evaluation.') loss = accumulated_loss / idx accuracy = accumulated_accuracy / idx print(f'Eval loss: {loss}, accuracy {accuracy}') return loss, accuracy params = train(train_mutag_ds, num_train_steps=500) evaluate(test_mutag_ds, params) ###Output _____no_output_____ ###Markdown We converge at ~76% test accuracy. We could of course further tune the parameters to improve this result. Link prediction on CORA (Citation Network) The final problem type we will explore is **link prediction**, an instance of an **edge-level** task. Given a graph, our goal is to predict whether a certain edge $(u,v)$ should be present or not. This is often useful in the recommender system settings (e.g., propose new friends in a social network, propose a movie to a user).As before, the first step is to obtain node latents $h_i$ using a GNN. In this context we will use the autoencoder language and call this GNN **encoder**. Then, we learn a binary classifier $f: (h_i, h_j) \to z_{i,j}$ (**decoder**), predicting if an edge $(i,j)$ should exist or not. While we could use a more elaborate decoder (e.g., an MLP), a common approach we will also use here is to focus on obtaining good node embeddings, and for the decoder simply use the similarity between node latents, i.e. $z_{i,j} = h_i^T h_j$. For this problem we will use the [**Cora** dataset](https://linqs.github.io/linqs-website/datasets/cora), a citation graph containing 2708 scientific publications. For each publication we have a 1433-dimensional feature vector, which is a bag-of-words representation (with a small, fixed dictionary) of the paper text. 
The edges in this graph represent citations, and are commonly treated as undirected. Each paper is in one of seven topics (classes) so you can also use this dataset for node classification.Similar to MUTAG, we have converted this dataset to jraph for you.Citation for the use of the Cora dataset:- [Qing Lu and Lise Getoor. Link-Based Classification. International Conference on Machine Learning. 2003.](https://linqs.github.io/linqs-website/publications/id:lu-icml03)- [Sen, Prithviraj, et al. Collective classification in network data. AI magazine 29.3. 2008.](https://linqs.github.io/linqs-website/datasets/cora)- [Dataset download link](https://linqs.github.io/linqs-website/datasets/cora) ###Code # Download jraph version of Cora. !wget -P /tmp/ https://storage.googleapis.com/dm-educational/assets/graph-nets/jraph_datasets/cora.pickle with open('/tmp/cora.pickle', 'rb') as f: cora_ds = pickle.load(f) ###Output _____no_output_____ ###Markdown Splitting Edges and Adding "Negative" EdgesFor the link prediction task, we split the edges into train, val and test sets and also add "negative" examples (edges that do not correspond to a citation). We will ignore the topic classes.For the validation and test splits, we add the same number of existing edges ("positive examples") and non-existing edges ("negative examples").In contrast to the validation and test splits, the training split only contains positive examples (set $T_+$). The $|T_+|$ negative examples to be used during training will be sampled ad hoc in each epoch and uniformly at random from all edges that are not in $T_+$. This allows the model to see a wider range of negative examples. ###Code def train_val_test_split_edges(graph: jraph.GraphsTuple, val_perc: float = 0.05, test_perc: float = 0.1): """Split edges in input graph into train, val and test splits. For val and test sets, also include negative edges. Based on torch_geometric.utils.train_test_split_edges. """ mask = graph.senders < graph.receivers senders = graph.senders[mask] receivers = graph.receivers[mask] num_val = int(val_perc * senders.shape[0]) num_test = int(test_perc * senders.shape[0]) permuted_indices = onp.random.permutation(range(senders.shape[0])) senders = senders[permuted_indices] receivers = receivers[permuted_indices] if graph.edges is not None: edges = graph.edges[permuted_indices] val_senders = senders[:num_val] val_receivers = receivers[:num_val] if graph.edges is not None: val_edges = edges[:num_val] test_senders = senders[num_val:num_val + num_test] test_receivers = receivers[num_val:num_val + num_test] if graph.edges is not None: test_edges = edges[num_val:num_val + num_test] train_senders = senders[num_val + num_test:] train_receivers = receivers[num_val + num_test:] train_edges = None if graph.edges is not None: train_edges = edges[num_val + num_test:] # make training edges undirected by adding reverse edges back in train_senders_undir = jnp.concatenate((train_senders, train_receivers)) train_receivers_undir = jnp.concatenate((train_receivers, train_senders)) train_senders = train_senders_undir train_receivers = train_receivers_undir # Negative edges. num_nodes = graph.n_node[0] # Create a negative adjacency mask, s.t. mask[i, j] = True iff edge i->j does # not exist in the original graph. 
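  # A dense num_nodes x num_nodes mask is fine for a graph of Cora's size, but
  # it would not scale to graphs with millions of nodes.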
  neg_adj_mask = onp.ones((num_nodes, num_nodes), dtype=onp.uint8)
  # upper triangular part
  neg_adj_mask = onp.triu(neg_adj_mask, k=1)
  neg_adj_mask[graph.senders, graph.receivers] = 0
  neg_adj_mask = neg_adj_mask.astype(bool)
  neg_senders, neg_receivers = neg_adj_mask.nonzero()

  perm = onp.random.permutation(range(len(neg_senders)))
  neg_senders = neg_senders[perm]
  neg_receivers = neg_receivers[perm]

  val_neg_senders = neg_senders[:num_val]
  val_neg_receivers = neg_receivers[:num_val]
  test_neg_senders = neg_senders[num_val:num_val + num_test]
  test_neg_receivers = neg_receivers[num_val:num_val + num_test]

  train_graph = jraph.GraphsTuple(
      nodes=graph.nodes,
      edges=train_edges,
      senders=train_senders,
      receivers=train_receivers,
      n_node=graph.n_node,
      n_edge=jnp.array([len(train_senders)]),
      globals=graph.globals)

  return train_graph, neg_adj_mask, val_senders, val_receivers, val_neg_senders, val_neg_receivers, test_senders, test_receivers, test_neg_senders, test_neg_receivers
###Output _____no_output_____
###Markdown Test the Edge Splitting Function ###Code
graph = cora_ds[0]['input_graph']
train_graph, neg_adj_mask, val_pos_senders, val_pos_receivers, val_neg_senders, val_neg_receivers, test_pos_senders, test_pos_receivers, test_neg_senders, test_neg_receivers = train_val_test_split_edges(graph)
print(f'Train set: {train_graph.senders.shape[0]} positive edges, we will sample the same number of negative edges at runtime')
print(f'Val set: {val_pos_senders.shape[0]} positive edges, {val_neg_senders.shape[0]} negative edges')
print(f'Test set: {test_pos_senders.shape[0]} positive edges, {test_neg_senders.shape[0]} negative edges')
print(f'Negative adjacency mask shape: {neg_adj_mask.shape}')
print(f'Number of negative edges to sample from: {neg_adj_mask.sum()}')
###Output _____no_output_____
###Markdown *Note*: It will often happen during training that as a negative example, we sample an initially existing edge (that is now e.g. a positive example in the test set). We are however not allowed to check for this, as we should be unaware of the existence of test edges during training.

Assuming our dot product decoder, we are essentially attempting to bring the latents of endpoints of edges from $T_+$ closer together, and make the latents of all other pairs of nodes as distant as possible. As this is impossible to fully satisfy, the hope is that the model will "fail" to distance those pairs of nodes where the edges should actually exist (positive examples from the test set).

Graph Network Model Definition

We will use jraph.GraphNetwork to build our graph net model. We first define update functions for node features. We are not using edge or global features for this task. ###Code
@jraph.concatenated_args
def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray:
  """Node update function for graph net."""
  net = hk.Sequential([hk.Linear(128), jax.nn.relu, hk.Linear(64)])
  return net(feats)

def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple:
  """Network definition."""
  graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1]))
  net = jraph.GraphNetwork(
      update_node_fn=node_update_fn,
      update_edge_fn=None,
      update_global_fn=None)
  return net(graph)

def decode(pred_graph: jraph.GraphsTuple,
           senders: jnp.ndarray,
           receivers: jnp.ndarray) -> jnp.ndarray:
  """Given a set of candidate edges, take dot product of respective nodes.

  Args:
    pred_graph: input graph.
    senders: Senders of candidate edges.
    receivers: Receivers of candidate edges.

  Returns:
    For each edge, computes dot product of the features of the two nodes.
""" return jnp.squeeze( jnp.sum(pred_graph.nodes[senders] * pred_graph.nodes[receivers], axis=1)) ###Output _____no_output_____ ###Markdown To evaluate our model, we first apply the sigmoid function to obtained dot products to get a score $s_{i,j} \in [0,1]$ for each edge. Now, we can pick a threshold $\tau$ and say that we predict all pairs $(i,j)$ s.t. $s_{i,j} \geq \tau$ as edges (and all the rest as non-edges). Loss and ROC-AUC-Metric FunctionDefine the binary classification cross-entropy loss.To aggregate the results over all choices of $\tau$, we will use ROC-AUC (the area under the ROC curve) as our evaluation metric. ###Code from sklearn.metrics import roc_auc_score def compute_bce_with_logits_loss(x: jnp.ndarray, y: jnp.ndarray) -> jnp.ndarray: """Computes binary cross-entropy with logits loss. Combines sigmoid and BCE, and uses log-sum-exp trick for numerical stability. See https://stackoverflow.com/a/66909858 if you want to learn more. Args: x: Predictions (logits). y: Labels. Returns: Binary cross-entropy loss with mean aggregation. """ max_val = jnp.clip(x, 0, None) loss = x - x * y + max_val + jnp.log( jnp.exp(-max_val) + jnp.exp((-x - max_val))) return loss.mean() def compute_loss(params: hk.Params, graph: jraph.GraphsTuple, senders: jnp.ndarray, receivers: jnp.ndarray, labels: jnp.ndarray, net: hk.Transformed) -> Tuple[jnp.ndarray, jnp.ndarray]: """Computes loss.""" pred_graph = net.apply(params, graph) preds = decode(pred_graph, senders, receivers) loss = compute_bce_with_logits_loss(preds, labels) return loss, preds def compute_roc_auc_score(preds: jnp.ndarray, labels: jnp.ndarray) -> jnp.ndarray: """Computes roc auc (area under the curve) score for classification.""" s = jax.nn.sigmoid(preds) roc_auc = roc_auc_score(labels, s) return roc_auc ###Output _____no_output_____ ###Markdown Helper function for sampling negative edges during training. ###Code def negative_sampling( graph: jraph.GraphsTuple, num_neg_samples: int, key: jnp.DeviceArray) -> Tuple[jnp.DeviceArray, jnp.DeviceArray]: """Samples negative edges, i.e. edges that don't exist in the input graph.""" num_nodes = graph.n_node[0] total_possible_edges = num_nodes**2 # convert 2D edge indices to 1D representation. pos_idx = graph.senders * num_nodes + graph.receivers # Percentage to oversample edges, so most likely will sample enough neg edges. alpha = jnp.abs(1 / (1 - 1.1 * (graph.senders.shape[0] / total_possible_edges))) perm = jax.random.randint( key, shape=(int(alpha * num_neg_samples),), minval=0, maxval=total_possible_edges, dtype=jnp.uint32) # mask where sampled edges are positive edges. mask = jnp.isin(perm, pos_idx) # remove positive edges. perm = perm[~mask][:num_neg_samples] # convert 1d back to 2d edge indices. neg_senders = perm // num_nodes neg_receivers = perm % num_nodes return neg_senders, neg_receivers ###Output _____no_output_____ ###Markdown Let's write the training loop: ###Code def train(dataset: List[Dict[str, Any]], num_epochs: int) -> hk.Params: """Training loop.""" key = jax.random.PRNGKey(42) # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] train_graph, _, val_pos_s, val_pos_r, val_neg_s, val_neg_r, test_pos_s, \ test_pos_r, test_neg_s, test_neg_r = train_val_test_split_edges( graph) # Prepare the validation and test data. 
val_senders = jnp.concatenate((val_pos_s, val_neg_s)) val_receivers = jnp.concatenate((val_pos_r, val_neg_r)) val_labels = jnp.concatenate( (jnp.ones(len(val_pos_s)), jnp.zeros(len(val_neg_s)))) test_senders = jnp.concatenate((test_pos_s, test_neg_s)) test_receivers = jnp.concatenate((test_pos_r, test_neg_r)) test_labels = jnp.concatenate( (jnp.ones(len(test_pos_s)), jnp.zeros(len(test_neg_s)))) # Initialize the network. params = net.init(key, train_graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. More information can be # found in the jax documentation. compute_loss_fn = jax.jit(jax.value_and_grad(compute_loss_fn, has_aux=True)) for epoch in range(num_epochs): num_neg_samples = train_graph.senders.shape[0] train_neg_senders, train_neg_receivers = negative_sampling( train_graph, num_neg_samples=num_neg_samples, key=key) train_senders = jnp.concatenate((train_graph.senders, train_neg_senders)) train_receivers = jnp.concatenate( (train_graph.receivers, train_neg_receivers)) train_labels = jnp.concatenate( (jnp.ones(len(train_graph.senders)), jnp.zeros(len(train_neg_senders)))) (train_loss, train_preds), grad = compute_loss_fn(params, train_graph, train_senders, train_receivers, train_labels) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if epoch % 10 == 0 or epoch == (num_epochs - 1): train_roc_auc = compute_roc_auc_score(train_preds, train_labels) val_loss, val_preds = compute_loss(params, train_graph, val_senders, val_receivers, val_labels, net) val_roc_auc = compute_roc_auc_score(val_preds, val_labels) print(f'epoch: {epoch}, train_loss: {train_loss:.3f}, ' f'train_roc_auc: {train_roc_auc:.3f}, val_loss: {val_loss:.3f}, ' f'val_roc_auc: {val_roc_auc:.3f}') test_loss, test_preds = compute_loss(params, train_graph, test_senders, test_receivers, test_labels, net) test_roc_auc = compute_roc_auc_score(test_preds, test_labels) print('Training finished') print( f'epoch: {epoch}, test_loss: {test_loss:.3f}, test_roc_auc: {test_roc_auc:.3f}' ) return params ###Output _____no_output_____ ###Markdown Let's train the model! We expect the model to reach roughly test_roc_auc of 0.84.(Note that ROC-AUC is a scalar between 0 and 1, with 1 being the ROC-AUC of a perfect classifier.) ###Code params = train(cora_ds, num_epochs=200) ###Output _____no_output_____ ###Markdown Introduction to Graph Neural Nets with JAX/jraph*Lisa Wang, DeepMind ([email protected]), Nikola Jovanović, ETH Zurich([email protected])***Colab Runtime:**If possible, please use a GPU hardware accelerator to run this colab. 
You can choose that under *Runtime > Change Runtime Type*.**Prerequisites:*** Some familiarity with [JAX](https://github.com/google/jax), you can refer to this [colab](https://colab.sandbox.google.com/github/google/jax/blob/master/docs/jax-101/01-jax-basics.ipynb) for an introduction to JAX.* Neural network basics* Graph theory basics (MIT Open Courseware [slides](https://ocw.mit.edu/courses/civil-and-environmental-engineering/1-022-introduction-to-network-models-fall-2018/lecture-notes/MIT1_022F18_lec2.pdf) by Amir Ajorlou)We recommend watching the [Theoretical Foundations of Graph Neural Networks Lecture](https://www.youtube.com/watch?v=uF53xsT7mjc&) by Petar Veličković before working through this colab. The talk provides a theoretical introduction to Graph Neural Networks (GNNs), historical context and motivating examples.**Outline:*** [Fundamental Graph Concepts](scrollTo=gsKA-syx_LUi)* [Graph Prediction Tasks](scrollTo=spQGRxhPN8Eo)* [Intro to the jraph Library](scrollTo=3C5YI9M0vwvb)* [Graph Convolutional Network (GCN) Layer](scrollTo=NZRMF2d-h2pd)* [Build GCN Model with Multiple Layers](scrollTo=lha8rbQ78l3S)* [Node Classification with GCN on Karate Club Dataset](scrollTo=Z5t7kw7SE_h4)* [Graph Attention (GAT) Layer](scrollTo=yg8g96NdBCK6)* [Train GAT Model on Karate Club Dataset](scrollTo=anfVGJwBe27v)* [Graph Classification on MUTAG (Molecules)](scrollTo=n5TxaTGzBkBa)* [Link Prediction on CORA (Citation Network)](scrollTo=OwVE88dTRC6V)* [Bonus: Intro to Graph Adversarial Attacks](scrollTo=35kbP8GZRFEm)**Additional Resources:*** Battaglia et al. (2018): [Relational inductive biases, deep learning, and graph networks](https://arxiv.org/pdf/1806.01261)---Some sections in this colab build on the [GraphNets Tutorial colab in pytorch](https://github.com/eemlcommunity/PracticalSessions2021/blob/main/graphnets/graphnets_tutorial.ipynb) by Nikola Jovanović.We would like to thank Razvan Pascanu and Petar Veličković for their valuable input and feedback.---*Copyright 2021 by the Authors.**Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0**Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.* Setup: Install and Import libraries ###Code !pip install git+git://github.com/deepmind/jraph.git !pip install flax !pip install dm-haiku # Imports %matplotlib inline import functools import matplotlib.pyplot as plt import jax import jax.numpy as jnp import jax.tree_util as tree import jraph import flax import haiku as hk import optax import numpy as onp import networkx as nx from typing import Tuple ###Output _____no_output_____ ###Markdown Fundamental Graph ConceptsA graph consists of a set of nodes and a set of edges, where edges form connections between nodes.More formally, a graph is defined as $ \mathcal{G} = (\mathcal{V}, \mathcal{E})$ where $\mathcal{V}$ is the set of vertices / nodes, and $\mathcal{E}$ is the set of edges.In an **undirected** graph, each edge is an unordered pair of two nodes $ \in \mathcal{V}$. E.g. 
a friend network can be represented as an undirected graph, assuming that the relationship "*A is friends with B*" implies "*B is friends with A*".In a **directed** graph, each edge is an ordered pair of nodes $ \in \mathcal{V}$. E.g. a citation network would be best represented with a directed graph, since the relationship "*A cites B*" does not imply "*B cites A*".The **degree** of a node is defined as the number of edges incident on it, i.e. the sum of incoming and outgoing edges for that node.The **in-degree** is the sum of incoming edges only, and the **out-degree** is the sum of outgoing edges only.There are two common ways to represent $\mathcal{E}$:1. As an **adjacency matrix**: a binary square matrix $A$ of size $|\mathcal{V}| \times |\mathcal{V}|$, where $A_{u,v}=1$ iff there is a connection between nodes $u$ and $v$.2. As an **adjacency list**: a list of ordered pairs $(u,v)$. Example: Below is a directed graph with four nodes and four edges.The arrows on the edges indicate the direction of each edge, e.g. there is an edge going from node 0 to node 1.node 0 has out-degree of 1, since it has one outgoing edge, and an in-degree of 2, since it has two incoming edges.The adjacency list representation of edges is:[(0, 1), (1, 2), (2, 0), (3, 0)]And adjacency matrix:$$\begin{array}{l|llll} source \setminus dest & n_0 & n_1 & n_2 & n_3 \\ \hlinen_0 & 0 & 1 & 0 & 0 \\n_1 & 0 & 0 & 1 & 0 \\n_2 & 1 & 0 & 0 & 0 \\n_3 & 1 & 0 & 0 & 0\end{array}$$ Graph Prediction TasksWhat are the kinds of problems we want to solve on graphs?The tasks fall into roughly three categories:1. **Node Classification**: E.g. what is the topic of a paper given a citation network of papers?2. **Link Prediction / Edge Classification**: E.g. are two people in a social network friends?3. **Graph Classification**: E.g. is this protein molecule (represented as a graph) likely going to be effective?The three main graph learning tasks. Image source: Petar Veličković.Which examples of graph prediction tasks come to your mind? Which task types do they correspond to?We will create and train models on all three task types in this tutorial. Intro to the jraph LibraryIn the following sections, we will learn how represent graphs and build GNNs in Python. We will use[jraph](https://github.com/deepmind/jraph), a lightweight library for working with GNNs in [JAX](https://github.com/google/jax). Representing a graph in jraphIn jraph, a graph is represented with a `GraphsTuple` object. In addition to defining the graph structure of nodes and edges, you can also store node features, edge features and global graph features in a `GraphsTuple`.In the `GraphsTuple`, edges are represented with an adjacency list, which is stored in two aligned arrays of node indices: senders (source nodes) and receivers (destinaton nodes).Each index corresponds to one edge, e.g. edge `i` goes from `senders[i]` to `receivers[i]`.You can even store multiple graphs in one `GraphsTuple` object.We will start with creating a simple directed graph with 4 nodes and 5 edges. We will also add toy features to the nodes, using `2*node_index` as the feature.We will later use this toy graph in the GCN demo. ###Code def build_toy_graph(): """Define a four node graph, each node has a scalar as its feature.""" # Nodes are defined implicitly by their features. # We will add four nodes, each with a feature, e.g. # node 0 has feature [0.], # node 1 has featre [2.] etc. # len(node_features) is the number of nodes. 
node_features = jnp.array([[0.], [2.], [4.], [6.]]) # We will now specify 5 directed edges connecting the nodes we defined above. # We define this with `senders` (source node indices) and `receivers` # (destination node indices). # For example, to add an edge from node 0 to node 1, we append 0 to senders, # and 1 to receivers. # We can do the same for all 5 edges: # 0 -> 1 # 1 -> 2 # 2 -> 0 # 3 -> 0 # 0 -> 3 senders = jnp.array([0, 1, 2, 3, 0]) receivers = jnp.array([1, 2, 0, 0, 3]) # You can optionally add edge attributes to the 5 edges. edges = jnp.array([[5.], [6.], [7.], [8.], [8.]]) # We then save the number of nodes and the number of edges. # This information is used to make running GNNs over multiple graphs # in a GraphsTuple possible. n_node = jnp.array([4]) n_edge = jnp.array([5]) # Optionally you can add `global` information, such as a graph label. global_context = jnp.array([[1]]) # Same feature dimensions as nodes and edges. graph = jraph.GraphsTuple( nodes=node_features, edges=edges, senders=senders, receivers=receivers, n_node=n_node, n_edge=n_edge, globals=global_context ) return graph graph = build_toy_graph() ###Output _____no_output_____ ###Markdown Inspecting the GraphsTuple ###Code # Number of nodes # Note that `n_node` returns an array. The length of `n_node` corresponds to # the number of graphs stored in one `GraphsTuple`. # In this case, we only have one graph, so n_node has length 1. graph.n_node # Number of edges graph.n_edge # Node features graph.nodes # Edge features graph.edges # Edges graph.senders graph.receivers # Graph-level features graph.globals ###Output _____no_output_____ ###Markdown Visualizing the GraphTo visualize the graph structure of the graph we created above, we will use the [`networkx`](networkx.org) library because it already has functions for drawing graphs.We first convert the `jraph.GraphsTuple` to a `networkx.DiGraph`. ###Code def convert_jraph_to_networkx_graph(jraph_graph): nodes, edges, receivers, senders, _, _, _ = jraph_graph nx_graph = nx.DiGraph() if nodes is None: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n) else: for n in range(jraph_graph.n_node[0]): nx_graph.add_node(n, node_feature=nodes[n]) if edges is None: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge(int(senders[e]), int(receivers[e])) else: for e in range(jraph_graph.n_edge[0]): nx_graph.add_edge(int(senders[e]), int(receivers[e]), edge_feature=edges[e]) return nx_graph def draw_jraph_graph_structure(jraph_graph: jraph.GraphsTuple): nx_graph = convert_jraph_to_networkx_graph(jraph_graph) pos = nx.spring_layout(nx_graph) nx.draw(nx_graph, pos=pos, with_labels = True, node_size=500, font_color='yellow') draw_jraph_graph_structure(graph) ###Output _____no_output_____ ###Markdown Graph Convolutional Network (GCN) LayerNow let's implement our first graph network!The graph convolutional network, introduced by by Kipf et al. (2017) in https://arxiv.org/abs/1609.02907, is one of the basic graph network architectures. We will build its core building block, the graph convolutional layer.In a convolutional neural network (CNN), a convolutional filter (e.g. 3x3) is applied repeatedly to different parts of a larger input (e.g. 64x64) by striding across the input.In a GCN, a convolution filter is applied to the neighbourhoods around a node in a graph.However, there are also some differences to point out:In contrast to the CNN filter, the neighbourhoods in a GCN can be of different sizes, and there is no ordering of inputs. 
To see that, note that the CNN filter performs a weighted sum aggregation over the inputs with learnable weights, where each filter input has its own weight. In the GCN, the same weight is applied to all neighbours and the aggregation function is not learned. In other words, in a GCN, each neighbor contributes equally. This is why the CNN filter is not order-invariant, but the GCN filter is.Comparison of CNN and GCN filters.Image source: https://arxiv.org/pdf/1901.00596.pdfMore specifically, the GCN layer performs two steps:1. _Compute messages / update node features_: Create a feature vector $\vec{h}_n$ for each node $n$ (e.g. with an MLP). This is going to be the message that this node will pass to neighboring nodes.2. _Message-passing / aggregate node features_: For each node, calculate a new feature vector $\vec{h}'_n$ based on the messages (features) from the nodes in its neighborhood. In a directed graph, only nodes from incoming edges are counted as neighbors. The image below shows this aggregation step. There are multiple options for aggregation in a GCN, e.g. taking the mean, the sum, the min or max. (Later in this tutorial, we will also see how we can make the aggregation function dependent on the node features by adding an attention mechanism in the Graph Attention Network.)*"A generic overview of a graph convolution operation, highlighting the relevant information for deriving the next-level features for every node in the graph."* Image source: Petar Veličković (https://github.com/PetarV-/TikZ) Simple GCN Layer

###Code

def apply_simplified_gcn(graph: jraph.GraphsTuple):
  # Unpack GraphsTuple
  nodes, _, receivers, senders, _, _, _ = graph

  # 1. Update node features
  # For simplicity, we will first use an identity function here, and replace it
  # with a trainable MLP block later.
  update_node_fn = lambda nodes: nodes
  nodes = update_node_fn(nodes)

  # 2. Aggregate node features over nodes in neighborhood
  # Equivalent to jnp.sum(n_node), but jittable
  total_num_nodes = tree.tree_leaves(nodes)[0].shape[0]
  aggregate_nodes_fn = jax.ops.segment_sum

  # Compute new node features by aggregating messages from neighboring nodes
  nodes = tree.tree_map(
      lambda x: aggregate_nodes_fn(x[senders], receivers, total_num_nodes),
      nodes)

  out_graph = graph._replace(nodes=nodes)
  return out_graph

###Output

 _____no_output_____

###Markdown

 We can now run the graph convolution on our toy graph from before.

###Code

graph = build_toy_graph()

###Output

 _____no_output_____

###Markdown

 Here is the visualized graph.

###Code

draw_jraph_graph_structure(graph)

out_graph = apply_simplified_gcn(graph)

###Output

 _____no_output_____

###Markdown

 Since we used the identity function for updating nodes and sum aggregation, we can verify the results pretty easily. As a reminder, in this toy graph, the node features are twice the node index (node $n$ has feature $2n$).Node 0: sum of features from node 2 and node 3 $\rightarrow$ 10.Node 1: sum of features from node 0 $\rightarrow$ 0.Node 2: sum of features from node 1 $\rightarrow$ 2.Node 3: sum of features from node 0 $\rightarrow$ 0.

###Code

out_graph.nodes

###Output

 _____no_output_____

###Markdown

 Add Trainable Parameters to GCN layerSo far our graph convolution operation doesn't have any learnable parameters.Let's add an MLP block to the update function to make it trainable.
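Before we do, a quick sanity check of the aggregation numbers above can be reassuring. The following is a minimal sketch, assuming the toy graph and `apply_simplified_gcn` defined earlier, that recomputes the same sums directly with `jax.ops.segment_sum`:

###Code

# Sketch: recompute the neighbourhood sums by hand.
toy_graph = build_toy_graph()
manual_nodes = jax.ops.segment_sum(
    toy_graph.nodes[toy_graph.senders],  # message carried along each edge
    toy_graph.receivers,                 # node that receives each message
    num_segments=toy_graph.nodes.shape[0])
print(manual_nodes)                           # expected: [[10.], [0.], [2.], [0.]]
print(apply_simplified_gcn(toy_graph).nodes)  # should match

###Output

 _____no_output_____

###Markdown

 With that confirmed, we can move on to the trainable update function.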
###Code class MLP(hk.Module): def __init__(self, features: jnp.ndarray): super().__init__() self.features = features def __call__(self, x: jnp.ndarray): layers = [] for feat in self.features[:-1]: layers.append(hk.Linear(feat)) layers.append(jax.nn.relu) layers.append(hk.Linear(self.features[-1])) mlp = hk.Sequential(layers) return mlp(x) # Use MLP block to define the update node function update_node_fn = lambda x: MLP(features=[8, 4])(x) ###Output _____no_output_____ ###Markdown Check outputs of `update_node_fn` with MLP Block ###Code graph = build_toy_graph() update_node_module = hk.without_apply_rng(hk.transform(update_node_fn)) params = update_node_module.init(jax.random.PRNGKey(42), graph.nodes) out = update_node_module.apply(params, graph.nodes) ###Output _____no_output_____ ###Markdown As output, we expect the updated node features. We should see one array of dim 4 for each of the 4 nodes, which is the result of applying a single MLP block to the features of each node individually. ###Code out ###Output _____no_output_____ ###Markdown Add Self-Edges (Edges connecting a node to itself)For each node, add an edge of the node onto itself. This way, nodes will include themselves in the aggregation step. ###Code def add_self_edges_fn(receivers, senders, total_num_nodes): """Adds self edges. Assumes self edges are not in the graph yet.""" receivers = jnp.concatenate((receivers, jnp.arange(total_num_nodes)), axis=0) senders = jnp.concatenate((senders, jnp.arange(total_num_nodes)), axis=0) return receivers, senders ###Output _____no_output_____ ###Markdown Add Symmetric NormalizationNote that the nodes may have different numbers of neighbors / degrees.This could lead to instabilities during neural network training, e.g. exploding or vanishing gradients. To address that, normalization is a commonly used method. In this case, we will normalize by node degrees.As a first attempt, we could count the number of incoming edges (including self-edge) and divide by that value.More formally, let $A$ be the adjacency matrix defining the edges of the graph.Then we define the degree matrix $D$ as a diagonal matrix with $D_{ii} = \sum_jA_{ij}$ (the degree of node $i$)Now we can normalize $AH$ by dividing it by the node degrees:$${D}^{-1}AH$$To take both the in and out degrees into account, we can use symmetric normalization, which is also what Kipf and Welling proposed in their [paper](https://arxiv.org/abs/1609.02907):$$D^{-\frac{1}{2}}AD^{-\frac{1}{2}}H$$ General GCN LayerNow we can write a more general and configurable version of the Graph Convolution layer, allowing the caller to specify:* **`update_node_fn`**: Function to use to update node features (e.g. the MLP block version we just implemented)* **`aggregate_nodes_fn`**: Aggregation function to use to aggregate messages from neighbourhood.* **`add_self_edges`**: Whether to add self edges for aggregation step.* **`symmetric_normalization`**: Whether to add symmetric normalization. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L506 def GraphConvolution( update_node_fn, aggregate_nodes_fn=jax.ops.segment_sum, add_self_edges: bool = False, symmetric_normalization: bool = True): """Returns a method that applies a Graph Convolution layer. Graph Convolutional layer as in https://arxiv.org/abs/1609.02907, NOTE: This implementation does not add an activation after aggregation. If you are stacking layers, you may want to add an activation between each layer. Args: update_node_fn: function used to update the nodes. 
In the paper a single layer MLP is used. aggregate_nodes_fn: function used to aggregates the sender nodes. add_self_edges: whether to add self edges to nodes in the graph as in the paper definition of GCN. Defaults to False. symmetric_normalization: whether to use symmetric normalization. Defaults to True. Returns: A method that applies a Graph Convolution layer. """ def _ApplyGCN(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Applies a Graph Convolution layer.""" nodes, _, receivers, senders, _, _, _ = graph # First pass nodes through the node updater. nodes = update_node_fn(nodes) # Equivalent to jnp.sum(n_node), but jittable total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. # In principle, a `GraphsTuple` should partition by n_edge, but in # this case it is not required since a GCN is agnostic to whether # the `GraphsTuple` is a batch of graphs or a single large graph. conv_receivers, conv_senders = add_self_edges_fn(receivers, senders, total_num_nodes) else: conv_senders = senders conv_receivers = receivers # pylint: disable=g-long-lambda if symmetric_normalization: # Calculate the normalization values. count_edges = lambda x: jax.ops.segment_sum( jnp.ones_like(conv_senders), x, total_num_nodes) sender_degree = count_edges(conv_senders) receiver_degree = count_edges(conv_receivers) # Pre normalize by sqrt sender degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: x * jax.lax.rsqrt(jnp.maximum(sender_degree, 1.0))[:, None], nodes, ) # Aggregate the pre-normalized nodes. nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # Post normalize by sqrt receiver degree. # Avoid dividing by 0 by taking maximum of (degree, 1). nodes = tree.tree_map( lambda x: (x * jax.lax.rsqrt(jnp.maximum(receiver_degree, 1.0))[:, None]), nodes, ) else: nodes = tree.tree_map( lambda x: aggregate_nodes_fn(x[conv_senders], conv_receivers, total_num_nodes), nodes) # pylint: enable=g-long-lambda return graph._replace(nodes=nodes) return _ApplyGCN ###Output _____no_output_____ ###Markdown Test General GCN Layer ###Code gcn_layer = GraphConvolution( update_node_fn=lambda n: MLP(features=[8, 4])(n), aggregate_nodes_fn=jax.ops.segment_sum, add_self_edges=True, symmetric_normalization=True ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Build GCN Model with Multiple LayersWith a single GCN layer, a node's representation after the GCN layer is onlyinfluenced by its direct neighbourhood. However, we may want to consider larger neighbourhoods, i.e. more than just 1 hop away. To achieve that, we can stackmultiple GCN layers, similar to how stacking CNN layers expands the input region.We will define a network with three GCN layers: ###Code def gcn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a graph neural network with 3 GCN layers. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. 
""" gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(4)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) graph = gn(graph) return graph graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gcn)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Node Classification with GCN on Karate Club DatasetTime to try out our GCN on our first graph prediction task! Zachary's Karate Club Dataset[Zachary's karate club](https://en.wikipedia.org/wiki/Zachary%27s_karate_club) is a small dataset commonly used as an example for a social graph. It's great for demo purposes, as it's easy to visualize and quick to train a model on it.A node represents a student or instructor in the club. An edge means that those two people have interacted outside of the class. There are two instructors in the club.Each student is assigned to one of two instructors. Optimizing the GCN on the Karate Club Node Classification TaskThe task is to predict the assignment of students to instructors, given the social graph and only knowing the assignment of two nodes (the two instructors) a priori.In other words, out of the 34 nodes, only two nodes are labeled, and we are trying to optimize the assignment of the other 32 nodes, by **maximizing the log-likelihood of the two known node assignments**.We will compute the accuracy of our node assignments by comparing to the ground-truth assignments. **Note that the ground-truth for the 32 student nodes is not used in the loss function itself.** Let's load the dataset: ###Code """Zachary's karate club example. From https://github.com/deepmind/jraph/blob/master/jraph/examples/zacharys_karate_club.py. Here we train a graph neural network to process Zachary's karate club. https://en.wikipedia.org/wiki/Zachary%27s_karate_club Zachary's karate club is used in the literature as an example of a social graph. Here we use a graphnet to optimize the assignments of the students in the karate club to two distinct karate instructors (Mr. Hi and John A). """ def get_zacharys_karate_club() -> jraph.GraphsTuple: """Returns GraphsTuple representing Zachary's karate club.""" social_graph = [ (1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2), (4, 0), (5, 0), (6, 0), (6, 4), (6, 5), (7, 0), (7, 1), (7, 2), (7, 3), (8, 0), (8, 2), (9, 2), (10, 0), (10, 4), (10, 5), (11, 0), (12, 0), (12, 3), (13, 0), (13, 1), (13, 2), (13, 3), (16, 5), (16, 6), (17, 0), (17, 1), (19, 0), (19, 1), (21, 0), (21, 1), (25, 23), (25, 24), (27, 2), (27, 23), (27, 24), (28, 2), (29, 23), (29, 26), (30, 1), (30, 8), (31, 0), (31, 24), (31, 25), (31, 28), (32, 2), (32, 8), (32, 14), (32, 15), (32, 18), (32, 20), (32, 22), (32, 23), (32, 29), (32, 30), (32, 31), (33, 8), (33, 9), (33, 13), (33, 14), (33, 15), (33, 18), (33, 19), (33, 20), (33, 22), (33, 23), (33, 26), (33, 27), (33, 28), (33, 29), (33, 30), (33, 31), (33, 32)] # Add reverse edges. social_graph += [(edge[1], edge[0]) for edge in social_graph] n_club_members = 34 return jraph.GraphsTuple( n_node=jnp.asarray([n_club_members]), n_edge=jnp.asarray([len(social_graph)]), # One-hot encoding for nodes, i.e. argmax(nodes) = node index. nodes=jnp.eye(n_club_members), # No edge features. 
edges=None, globals=None, senders=jnp.asarray([edge[0] for edge in social_graph]), receivers=jnp.asarray([edge[1] for edge in social_graph])) def get_ground_truth_assignments_for_zacharys_karate_club() -> jnp.ndarray: """Returns ground truth assignments for Zachary's karate club.""" return jnp.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) graph = get_zacharys_karate_club() print(f'Number of nodes: {graph.n_node[0]}') print(f'Number of edges: {graph.n_edge[0]}') ###Output _____no_output_____ ###Markdown Visualize the karate club graph with circular node layout: ###Code nx_graph = convert_jraph_to_networkx_graph(graph) pos = nx.circular_layout(nx_graph) plt.figure(figsize=(6, 6)) nx.draw(nx_graph, pos=pos, with_labels = True, node_size=500, font_color='yellow') ###Output _____no_output_____ ###Markdown Define the GCN with the `GraphConvolution` layers we implemented: ###Code def gcn_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GCN for the karate club task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ gn = GraphConvolution( update_node_fn=lambda n: jax.nn.relu(hk.Linear(8)(n)), add_self_edges=True) graph = gn(graph) gn = GraphConvolution( update_node_fn=hk.Linear(2)) # output dim is 2 because we have 2 output classes. graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Training and evaluation code: ###Code def optimize_club(network, num_steps: int): """Solves the karate club problem by optimizing the assignments of students.""" zacharys_karate_club = get_zacharys_karate_club() labels = get_ground_truth_assignments_for_zacharys_karate_club() params = network.init(jax.random.PRNGKey(42), zacharys_karate_club) @jax.jit def predict(params): decoded_graph = network.apply(params, zacharys_karate_club) return jnp.argmax(decoded_graph.nodes, axis=1) @jax.jit def prediction_loss(params): decoded_graph = network.apply(params, zacharys_karate_club) # We interpret the decoded nodes as a pair of logits for each node. log_prob = jax.nn.log_softmax(decoded_graph.nodes) # The only two assignments we know a-priori are those of Mr. Hi (Node 0) # and John A (Node 33). return -(log_prob[0, 0] + log_prob[33, 1]) opt_init, opt_update = optax.adam(1e-2) opt_state = opt_init(params) @jax.jit def update(params, opt_state): g = jax.grad(prediction_loss)(params) updates, opt_state = opt_update(g, opt_state) return optax.apply_updates(params, updates), opt_state @jax.jit def accuracy(params): decoded_graph = network.apply(params, zacharys_karate_club) return jnp.mean(jnp.argmax(decoded_graph.nodes, axis=1) == labels) for step in range(num_steps): print(f"step {step} accuracy {accuracy(params).item():.2f}") params, opt_state = update(params, opt_state) return predict(params) ###Output _____no_output_____ ###Markdown Let's train the GCN! We expect this model reach an accuracy of about 0.91. ###Code network = hk.without_apply_rng(hk.transform(gcn_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown Try modifying the model parameters to see if you can improve the accuracy!You can also modify the dataset itself, and see how that influences model training. Node assignments predicted by the model at the end of training: ###Code result ###Output _____no_output_____ ###Markdown Visualize ground truth and predicted node assignments:What do you think of the results? 
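Before plotting, it can also be useful to quantify the agreement numerically. Here is a minimal sketch, assuming `result` and the ground-truth helper defined above:

###Code

# Sketch: compare predicted assignments against the ground truth.
# Note that training itself only used the two instructor labels.
gt_assignments = get_ground_truth_assignments_for_zacharys_karate_club()
accuracy = jnp.mean(result == gt_assignments)
mismatched_nodes = jnp.where(result != gt_assignments)[0]
print(f'Accuracy against ground truth: {accuracy:.2f}')
print(f'Mismatched nodes: {mismatched_nodes}')

###Output

 _____no_output_____

###Markdown

 Now let's look at the two assignments side by side.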
###Code zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GCN') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Attention (GAT) LayerWhile the GCN we covered in the previous section can learn meaningful representations, it also has some shortcomings. Can you think of any?In the GCN layer, the messages from all its neighbours and the node itself are equally weighted. This may lead to loss of node-specific information. E.g., consider the case when a set of nodes shares the same set of neighbors, and start out with different node features. Then because of averaging, their resulting output features would be the same. Adding self-edges mitigates this issue by a small amount, but this problem is magnified with increasing number of GCN layers and number of edges connecting to a node.The graph attention (GAT) mechanism, as proposed by [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903), allows the network to learn how to weigh / assign importance to the node features from the neighbourhood when computing the new node features. This is very similar to the idea of using attention in Transformers, which were introduced in [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762).(One could even argue that Transformers are graph attention networks operating on the special case of fully-connected graphs.)In the figure below, $\vec{h}$ are the node features and $\vec{\alpha}$ are the learned attention weights.Figure Credit: [Velickovic et al. ( 2017)](https://arxiv.org/abs/1710.10903).(Detail: This image is showing multi-headed attention with 3 heads, each color corresponding to a different head. At the end, an aggregation function is applied over all the heads.)To obtain the output node features of the GAT layer, we compute:$$ \vec{h}'_i = \sum _{j \in \mathcal{N}(i)}\alpha_{ij} \mathbf{W} \vec{h}_j$$Here, $\mathbf{W}$ is a weight matrix which performs a linear transformation on the input. How do we obtain $\alpha$, or in other words, learn what to pay attention to?Intuitively, the attention coefficient $\alpha_{ij}$ should rely on both the transformed features from nodes $i$ and $j$. 
So let's first define an attention mechanism function $\mathrm{attention\_fn}$ that computes the intermediary attention coefficients $e_{ij}$:$$ e_{ij} = \mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j)$$To obtain normalized attention weights $\alpha$, we apply a softmax:$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum _{j \in \mathcal{N}(i)}\exp(e_{ij})}$$For the function $a$, the authors of the GAT paper chose to concatenate the transformed node features (denoted by $||$) and apply a single-layer feedforward network, parameterized by a weight vector $\vec{\mathbf{a}}$ and with LeakyRelu as non-linearity.In the implementation below, we refer to $\mathbf{W}$ as `attention_query_fn` and $att\_fn$ as `attention_logit_fn`.$$\mathrm{attention\_fn}(\mathbf{W}\vec{h}_i, \mathbf{W}\vec{h}_j) = \text{LeakyReLU}(\vec{\mathbf{a}}(\mathbf{W}\vec{h}_i || \mathbf{W}\vec{h}_j))$$The figure below summarizes this attention mechanism visually.Figure Credit: Petar Velickovic.<!-- $\sum_{j \in \mathcal{N}(i)}\vec{\alpha}_{ij} \stackrel{!}{=}1 $ --> ###Code # GAT implementation adapted from https://github.com/deepmind/jraph/blob/master/jraph/_src/models.py#L442. def GAT(attention_query_fn, attention_logit_fn, node_update_fn=None, add_self_edges=True): """Returns a method that applies a Graph Attention Network layer. Graph Attention message passing as described in https://arxiv.org/pdf/1710.10903.pdf. This model expects node features as a jnp.array, may use edge features for computing attention weights, and ignore global features. It does not support nests. Args: attention_query_fn: function that generates attention queries from sender node features. attention_logit_fn: function that converts attention queries into logits for softmax attention. node_update_fn: function that updates the aggregated messages. If None, will apply leaky relu and concatenate (if using multi-head attention). Returns: A function that applies a Graph Attention layer. """ # pylint: disable=g-long-lambda if node_update_fn is None: # By default, apply the leaky relu and then concatenate the heads on the # feature axis. node_update_fn = lambda x: jnp.reshape( jax.nn.leaky_relu(x), (x.shape[0], -1)) def _ApplyGAT(graph): """Applies a Graph Attention layer.""" nodes, edges, receivers, senders, _, _, _ = graph # Equivalent to the sum of n_node, but statically known. try: sum_n_node = nodes.shape[0] except IndexError: raise IndexError('GAT requires node features') # Pass nodes through the attention query function to transform # node features, e.g. with an MLP. nodes = attention_query_fn(nodes) total_num_nodes = tree.tree_leaves(nodes)[0].shape[0] if add_self_edges: # We add self edges to the senders and receivers so that each node # includes itself in aggregation. receivers, senders = add_self_edges_fn(receivers, senders, total_num_nodes) # We compute the softmax logits using a function that takes the # embedded sender and receiver attributes. sent_attributes = nodes[senders] received_attributes = nodes[receivers] att_softmax_logits = attention_logit_fn( sent_attributes, received_attributes, edges) # Compute the attention softmax weights on the entire tree. att_weights = jraph.segment_softmax(att_softmax_logits, segment_ids=receivers, num_segments=sum_n_node) # Apply attention weights. messages = sent_attributes * att_weights # Aggregate messages to nodes. nodes = jax.ops.segment_sum(messages, receivers, num_segments=sum_n_node) # Apply an update function to the aggregated messages. 
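    # With the default node_update_fn, this applies a leaky ReLU and flattens
    # (concatenates) any attention heads along the feature axis.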
nodes = node_update_fn(nodes) return graph._replace(nodes=nodes) # pylint: enable=g-long-lambda return _ApplyGAT ###Output _____no_output_____ ###Markdown Test GAT Layer ###Code def attention_logit_fn(sender_attr, receiver_attr, edges): del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gat_layer = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), # Applies W to the node features attention_logit_fn=attention_logit_fn, node_update_fn=None, add_self_edges=True, ) graph = build_toy_graph() network = hk.without_apply_rng(hk.transform(gat_layer)) params = network.init(jax.random.PRNGKey(42), graph) out_graph = network.apply(params, graph) out_graph.nodes ###Output _____no_output_____ ###Markdown Train GAT Model on Karate Club DatasetWe will now repeat the karate club experiment with a GAT network. ###Code def gat_definition(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Defines a GAT network for the karate club node classification task. Args: graph: GraphsTuple the network processes. Returns: output graph with updated node values. """ def _attention_logit_fn( sender_attr, receiver_attr, edges): del edges x = jnp.concatenate((sender_attr, receiver_attr), axis=1) return hk.Linear(1)(x) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=None, add_self_edges=True) graph = gn(graph) gn = GAT( attention_query_fn=lambda n: hk.Linear(8)(n), attention_logit_fn=_attention_logit_fn, node_update_fn=hk.Linear(2), add_self_edges=True) graph = gn(graph) return graph ###Output _____no_output_____ ###Markdown Let's train the model!We expect the model to reach an accuracy of about 0.97. ###Code network = hk.without_apply_rng(hk.transform(gat_definition)) result = optimize_club(network, num_steps=15) ###Output _____no_output_____ ###Markdown The final node assignment predicted by the trained model: ###Code result zacharys_karate_club = get_zacharys_karate_club() nx_graph = convert_jraph_to_networkx_graph(zacharys_karate_club) pos = nx.circular_layout(nx_graph) fig = plt.figure(figsize=(15, 7)) ax1 = fig.add_subplot(121) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=result.tolist(), font_color='white') ax1.title.set_text('Predicted Node Assignments with GAT') gt_labels = get_ground_truth_assignments_for_zacharys_karate_club() ax2 = fig.add_subplot(122) nx.draw(nx_graph, pos=pos, with_labels=True, node_size=500, node_color=gt_labels.tolist(), font_color='white') ax2.title.set_text('Ground-Truth Node Assignments') fig.suptitle('Do you spot the difference? 😐', y=-0.01) plt.show() ###Output _____no_output_____ ###Markdown Graph Classification on MUTAG (Molecules) In the previous section, we used our GCN and GAT networks on a node classification problem. Now, let's use the same model architectures on a **graph classification task**. The main difference from our previous setup is that instead of observing individual node latents, we are now attempting to summarize them into one embedding vector, representative of the entire graph, which we then use to predict the class of this graph.We will do this on one of the most common tasks of this type -- **molecular property prediction**, where molecules are represented as graphs. Nodes correspond to atoms, and edges represent the bonds between them. 
We will use the **MUTAG** dataset for this example, a common dataset from the [TUDatasets](https://chrsmrrs.github.io/datasets/) collection.We will download this graph dataset from pytorch geometric, and convert it to a jraph graph dataset. Please install pytorch and pytorch geometric (just for dataset purposes). ###Code # Install required packages. !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html !pip install torch-geometric import torch_geometric from torch_geometric.datasets import TUDataset mutag_pytorch_dataset = TUDataset(root='./', name='MUTAG') #@title Convert Pytorch graph dataset to jraph def convert_pytorch_graph_to_jraph(pytorch_g: torch_geometric.data.Data) -> Tuple[jraph.GraphsTuple, jnp.ndarray]: """Converts a single pytorch graph Data object to a jraph Graphstuple. Args: pytorch_g: A pytorch-geometric Data object, containing one graph. Returns: Tuple of jraph Graphstuple containing a single graph, and the target. """ node_features, edge_features, senders, receivers, globals, targets = None, None, None, None, None, None if 'x' in pytorch_g: node_features = jnp.array(pytorch_g.x) if 'edge_attr' in pytorch_g: edge_features = jnp.array(pytorch_g.edge_attr) if 'edge_index' in pytorch_g: senders = jnp.array(pytorch_g.edge_index[0]) receivers = jnp.array(pytorch_g.edge_index[1]) if 'y' in pytorch_g: target = jnp.array(pytorch_g.y) n_node = jnp.array([pytorch_g.num_nodes]) n_edge = jnp.array([pytorch_g.num_edges]) jraph_g = jraph.GraphsTuple( nodes=node_features, edges=edge_features, senders=senders, receivers=receivers, n_node=n_node, n_edge=n_edge, globals=globals ) return jraph_g, target def convert_pytorch_dataset_to_jraph(pytorch_dataset): """Converts a pytorch dataset to a jraph graph dataset.""" jraph_dataset = [] for pytorch_g in pytorch_dataset: sample = {} sample['input_graph'], sample['target'] = convert_pytorch_graph_to_jraph(pytorch_g) jraph_dataset.append(sample) return jraph_dataset mutag_ds = convert_pytorch_dataset_to_jraph(mutag_pytorch_dataset) len(mutag_ds) # Inspect the first graph g = mutag_ds[0]['input_graph'] print(f'Number of nodes: {g.n_node[0]}') print(f'Number of edges: {g.n_edge[0]}') print(f'Node features shape: {g.nodes.shape}') print(f'Edge features shape: {g.edges.shape}') draw_jraph_graph_structure(g) print(f"Target: {mutag_ds[0]['target']}") ###Output _____no_output_____ ###Markdown We see that there are 188 graphs, to be classified in one of 2 classes, representing "their mutagenic effect on a specific gram negative bacterium". Node features represent the 1-hot encoding of the atom type (0=C, 1=N, 2=O, 3=F, 4=I, 5=Cl, 6=Br). Edge features (`edge_attr`) represent the bond type, which we will here ignore.Let's split the dataset to use the first 150 graphs as the training set (and the rest as the test set). ###Code train_mutag_ds = mutag_ds[:150] test_mutag_ds = mutag_ds[150:] ###Output _____no_output_____ ###Markdown Padding Graphs to Speed Up TrainingSince jax recompiles the program for each graph size, training would take a long time due to recompilation for different graph sizes. To address that, we pad the number of nodes and edges in the graphs to nearest power of two. Since jax maintains a cacheof compiled programs, the compilation cost is amortized. 
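To make the effect of padding concrete, here is a small sketch, assuming the toy graph from `build_toy_graph` is still available; `jraph.pad_with_graphs` gathers all padding nodes and edges into one extra dummy graph:

###Code

# Sketch: pad the toy graph (4 nodes, 5 edges) up to 8 nodes and 8 edges.
toy_graph = build_toy_graph()
padded_graph = jraph.pad_with_graphs(toy_graph, n_node=8, n_edge=8, n_graph=2)
print(padded_graph.n_node)  # [4 4]: the real graph plus a dummy padding graph
print(padded_graph.n_edge)  # [5 3]
print(jraph.get_graph_padding_mask(padded_graph))  # [ True False]

###Output

 _____no_output_____

###Markdown

 The helpers below apply the same idea, padding every graph to the nearest power of two so that only a handful of distinct shapes are ever compiled.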
###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def _nearest_bigger_power_of_two(x: int) -> int: """Computes the nearest power of two greater than x for padding.""" y = 2 while y < x: y *= 2 return y def pad_graph_to_nearest_power_of_two( graphs_tuple: jraph.GraphsTuple) -> jraph.GraphsTuple: """Pads a batched `GraphsTuple` to the nearest power of two. For example, if a `GraphsTuple` has 7 nodes, 5 edges and 3 graphs, this method would pad the `GraphsTuple` nodes and edges: 7 nodes --> 8 nodes (2^3) 5 edges --> 8 edges (2^3) And since padding is accomplished using `jraph.pad_with_graphs`, an extra graph and node is added: 8 nodes --> 9 nodes 3 graphs --> 4 graphs Args: graphs_tuple: a batched `GraphsTuple` (can be batch size 1). Returns: A graphs_tuple batched to the nearest power of two. """ # Add 1 since we need at least one padding node for pad_with_graphs. pad_nodes_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_node)) + 1 pad_edges_to = _nearest_bigger_power_of_two(jnp.sum(graphs_tuple.n_edge)) # Add 1 since we need at least one padding graph for pad_with_graphs. # We do not pad to nearest power of two because the batch size is fixed. pad_graphs_to = graphs_tuple.n_node.shape[0] + 1 return jraph.pad_with_graphs(graphs_tuple, pad_nodes_to, pad_edges_to, pad_graphs_to) ###Output _____no_output_____ ###Markdown Graph Network Model DefinitionWe will use `jraph.GraphNetwork()` to build our graph model. The `GraphNetwork` architecture is defined in [Battaglia et al. (2018)](https://arxiv.org/pdf/1806.01261.pdf).We first define update functions for nodes, edges, and the full graph (global). We will use MLP blocks for all three. ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py @jraph.concatenated_args def edge_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Edge update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Node update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(128)]) return net(feats) @jraph.concatenated_args def update_global_fn(feats: jnp.ndarray) -> jnp.ndarray: """Global update function for graph net.""" # MUTAG is a binary classification task, so output pos neg logits. net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(2)]) return net(feats) def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: # Add a global paramater for graph classification. graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1])) embedder = jraph.GraphMapFeatures( hk.Linear(128), hk.Linear(128), hk.Linear(128)) net = jraph.GraphNetwork( update_node_fn=node_update_fn, update_edge_fn=edge_update_fn, update_global_fn=update_global_fn) return net(embedder(graph)) ###Output _____no_output_____ ###Markdown Loss and Accuracy FunctionDefine the classification cross-entropy loss and accuracy function. ###Code def compute_loss(params: jnp.ndarray, graph: jraph.GraphsTuple, label: jnp.ndarray, net: jraph.GraphsTuple) -> jnp.ndarray: """Computes loss and accuracy.""" pred_graph = net.apply(params, graph) preds = jax.nn.log_softmax(pred_graph.globals) targets = jax.nn.one_hot(label, 2) # Since we have an extra 'dummy' graph in our batch due to padding, we want # to mask out any loss associated with the dummy graph. 
# Since we padded with `pad_with_graphs` we can recover the mask by using # get_graph_padding_mask. mask = jraph.get_graph_padding_mask(pred_graph) # Cross entropy loss. loss = -jnp.mean(preds * targets * mask[:, None]) # Accuracy taking into account the mask. accuracy = jnp.sum( (jnp.argmax(pred_graph.globals, axis=1) == label) * mask)/jnp.sum(mask) return loss, accuracy ###Output _____no_output_____ ###Markdown Training and Evaluation Functions ###Code # Adapted from https://github.com/deepmind/jraph/blob/master/jraph/ogb_examples/train.py def train(dataset, num_train_steps: int) -> jnp.ndarray: """Training loop.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] # Initialize the network. params = net.init(jax.random.PRNGKey(42), graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. More information can be # found in the jax documentation. compute_loss_fn = jax.jit(jax.value_and_grad( compute_loss_fn, has_aux=True)) for idx in range(num_train_steps): graph = dataset[idx % len(dataset)]['input_graph'] label = dataset[idx % len(dataset)]['target'] # Jax will re-jit your graphnet every time a new graph shape is encountered. # In the limit, this means a new compilation every training step, which # will result in *extremely* slow training. To prevent this, pad each # batch of graphs to the nearest power of two. Since jax maintains a cache # of compiled programs, the compilation cost is amortized. graph = pad_graph_to_nearest_power_of_two(graph) # Since padding is implemented with pad_with_graphs, an extra graph has # been added to the batch, which means there should be an extra label. label = jnp.concatenate([label, jnp.array([0])]) (loss, acc), grad = compute_loss_fn(params, graph, label) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if idx % 50 == 0: print(f'step: {idx}, loss: {loss}, acc: {acc}') print('Training finished') return params def evaluate(dataset, params): """Evaluation Script.""" # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] accumulated_loss = 0 accumulated_accuracy = 0 compute_loss_fn = jax.jit(functools.partial(compute_loss, net=net)) for idx in range(len(dataset)): graph = dataset[idx]['input_graph'] label = dataset[idx]['target'] graph = pad_graph_to_nearest_power_of_two(graph) label = jnp.concatenate([label, jnp.array([0])]) loss, acc = compute_loss_fn(params, graph, label) accumulated_accuracy += acc accumulated_loss += loss if idx % 100 == 0: print(f'Evaluated {idx + 1} graphs') print('Completed evaluation.') loss = accumulated_loss / idx accuracy = accumulated_accuracy / idx print(f'Eval loss: {loss}, accuracy {accuracy}') return loss, accuracy params = train(train_mutag_ds, num_train_steps=500) evaluate(test_mutag_ds, params) ###Output _____no_output_____ ###Markdown We converge at ~76% test accuracy. We could of course further tune the parameters to improve this result. 
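As a quick qualitative check, we can also look at the prediction for a single held-out molecule. This is a minimal sketch, assuming `params`, `net_fn`, `pad_graph_to_nearest_power_of_two` and `test_mutag_ds` from above:

###Code

# Sketch: predict class probabilities for one test molecule.
net = hk.without_apply_rng(hk.transform(net_fn))
sample = test_mutag_ds[0]
padded_graph = pad_graph_to_nearest_power_of_two(sample['input_graph'])
pred_graph = net.apply(params, padded_graph)
# The first entry of `globals` belongs to the real graph; the second to the
# dummy padding graph.
probs = jax.nn.softmax(pred_graph.globals[0])
print(f"Predicted class probabilities: {probs}, true label: {sample['target']}")

###Output

 _____no_output_____

###Markdown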
Link prediction on CORA (Citation Network) The final problem type we will explore is **link prediction**, an instance of an **edge-level** task. Given a graph, our goal is to predict whether a certain edge $(u,v)$ should be present or not. This is often useful in the recommender system settings (e.g., propose new friends in a social network, propose a movie to a user).As before, the first step is to obtain node latents $h_i$ using a GNN. In this context we will use the autoencoder language and call this GNN **encoder**. Then, we learn a binary classifier $f: (h_i, h_j) \to z_{i,j}$ (**decoder**), predicting if an edge $(i,j)$ should exist or not. While we could use a more elaborate decoder (e.g., an MLP), a common approach we will also use here is to focus on obtaining good node embeddings, and for the decoder simply use the similarity between node latents, i.e. $z_{i,j} = h_i^T h_j$. For this problem we will use the [**Cora** dataset](https://relational.fit.cvut.cz/dataset/CORA), a citation graph containing 2708 scientific publications. For each publication we have a 1433-dimensional feature vector, which is a bag-of-words representation (with a small, fixed dictionary) of the paper text. The edges in this graph represent citations, and are commonly treated as undirected. Each paper is in one of seven topics (classes) so you can also use this dataset for node classification. ###Code cora_pytorch_ds = torch_geometric.datasets.Planetoid(root='/', name='Cora') cora_ds = convert_pytorch_dataset_to_jraph(cora_pytorch_ds) ###Output _____no_output_____ ###Markdown Splitting Edges and Adding "Negative" EdgesFor the link prediction task, we split the edges into train, val and test sets and also add "negative" examples (edges that do not correspond to a citation). We will ignore the topic classes.For the validation and test splits, we add the same number of existing edges ("positive examples") and non-existing edges ("negative examples").In contrast to the validation and test splits, the training split only contains positive examples (set $T_+$). The $|T_+|$ negative examples to be used during training will be sampled ad hoc in each epoch and uniformly at random from all edges that are not in $T_+$. This allows the model to see a wider range of negative examples. ###Code def train_val_test_split_edges(graph: jraph.GraphsTuple, val_perc: float = 0.05, test_perc: float = 0.1): """Split edges in input graph into train, val and test splits. For val and test sets, also include negative edges. Based on torch_geometric.utils.train_test_split_edges. 
""" mask = graph.senders < graph.receivers senders = graph.senders[mask] receivers = graph.receivers[mask] num_val = int(val_perc * senders.shape[0]) num_test = int(test_perc * senders.shape[0]) permuted_indices = onp.random.permutation(range(senders.shape[0])) senders = senders[permuted_indices] receivers = receivers[permuted_indices] if graph.edges is not None: edges = graph.edges[permuted_indices] val_senders = senders[:num_val] val_receivers = receivers[:num_val] if graph.edges is not None: val_edges = edges[:num_val] test_senders = senders[num_val: num_val+num_test] test_receivers = receivers[num_val: num_val+num_test] if graph.edges is not None: test_edges = edges[num_val: num_val+num_test] train_senders = senders[num_val+num_test:] train_receivers = receivers[num_val+num_test:] train_edges = None if graph.edges is not None: train_edges = edges[num_val+num_test:] # make training edges undirected by adding reverse edges back in train_senders_undir = jnp.concatenate((train_senders, train_receivers)) train_receivers_undir = jnp.concatenate((train_receivers, train_senders)) train_senders = train_senders_undir train_receivers = train_receivers_undir # Negative edges. num_nodes = graph.n_node[0] # Create a negative adjacency mask, s.t. mask[i, j] = True iff edge i->j does # not exist in the original graph. neg_adj_mask = onp.ones((num_nodes, num_nodes), dtype=onp.uint8) # upper triangular part neg_adj_mask = onp.triu(neg_adj_mask, k=1) neg_adj_mask[graph.senders, graph.receivers] = 0 neg_adj_mask = neg_adj_mask.astype(onp.bool) neg_senders, neg_receivers = neg_adj_mask.nonzero() perm = onp.random.permutation(range(len(neg_senders))) neg_senders = neg_senders[perm] neg_receivers = neg_receivers[perm] val_neg_senders = neg_senders[:num_val] val_neg_receivers = neg_receivers[:num_val] test_neg_senders = neg_senders[num_val: num_val + num_test] test_neg_receivers = neg_receivers[num_val: num_val + num_test] train_graph = jraph.GraphsTuple( nodes=graph.nodes, edges=train_edges, senders=train_senders, receivers=train_receivers, n_node=graph.n_node, n_edge=jnp.array([len(train_senders)]), globals=graph.globals ) return train_graph, neg_adj_mask, val_senders, val_receivers, val_neg_senders, val_neg_receivers, test_senders, test_receivers, test_neg_senders, test_neg_receivers ###Output _____no_output_____ ###Markdown Test the Edge Splitting Function ###Code graph = cora_ds[0]['input_graph'] train_graph, neg_adj_mask, val_pos_senders, val_pos_receivers, val_neg_senders, val_neg_receivers, test_pos_senders, test_pos_receivers, test_neg_senders, test_neg_receivers = train_val_test_split_edges(graph) print(f'Train set: {train_graph.senders.shape[0]} positive edges, we will sample the same number of negative edges at runtime') print(f'Val set: {val_pos_senders.shape[0]} positive edges, {val_neg_senders.shape[0]} negative edges') print(f'Test set: {test_pos_senders.shape[0]} positive edges, {test_neg_senders.shape[0]} negative edges') print(f'Negative adjacency mask shape: {neg_adj_mask.shape}') print(f'Numbe of negative edges to sample from: {neg_adj_mask.sum()}') ###Output _____no_output_____ ###Markdown *Note*: It will often happen during training that as a negative example, we sample an initially existing edge (that is now e.g. a positive example in the test set). 
We are however not allowed to check for this, as we should be unaware of the existence of test edges during training.Assuming our dot product decoder, we are essentially attempting to bring the latents of endpoints of edges from $T_+$ closer together, and make the latents of all other pairs of nodes as distant as possible. As this is impossible to fully satisfy, the hope is that the model will "fail" to distance those pairs of nodes where the edges should actually exist (positive examples from the test set). Graph Network Model DefinitionWe will use jraph.GraphNetwork to build our graph net model.We first define update functions for node features. We are not using edge or global features for this task. ###Code @jraph.concatenated_args def node_update_fn(feats: jnp.ndarray) -> jnp.ndarray: """Node update function for graph net.""" net = hk.Sequential( [hk.Linear(128), jax.nn.relu, hk.Linear(64)]) return net(feats) def net_fn(graph: jraph.GraphsTuple) -> jraph.GraphsTuple: """Network definition.""" graph = graph._replace(globals=jnp.zeros([graph.n_node.shape[0], 1])) net = jraph.GraphNetwork( update_node_fn=node_update_fn, update_edge_fn=None, update_global_fn=None) return net(graph) def decode(pred_graph: jraph.GraphsTuple, senders, receivers) -> jnp.DeviceArray: """Given a set of candidate edges, take dot product of respective nodes. Args: pred_graph: input graph. senders: Senders of candidate edges. receivers: Receivers of candidate edges. Returns: For each edge, computes dot product of the features of the two nodes. """ return jnp.squeeze(jnp.sum(pred_graph.nodes[senders] * pred_graph.nodes[receivers], axis=1)) ###Output _____no_output_____ ###Markdown To evaluate our model, we first apply the sigmoid function to obtained dot products to get a score $s_{i,j} \in [0,1]$ for each edge. Now, we can pick a threshold $\tau$ and say that we predict all pairs $(i,j)$ s.t. $s_{i,j} \geq \tau$ as edges (and all the rest as non-edges). Loss and ROC-AUC-Metric FunctionDefine the binary classification cross-entropy loss.To aggregate the results over all choices of $\tau$, we will use ROC-AUC (the area under the ROC curve) as our evaluation metric. ###Code from sklearn.metrics import roc_auc_score def compute_bce_with_logits_loss(x: jnp.DeviceArray, y: jnp.DeviceArray) -> jnp.DeviceArray: """Computes binary cross-entropy with logits loss. Combines sigmoid and BCE, and uses log-sum-exp trick for numerical stability. See https://stackoverflow.com/a/66909858 if you want to learn more. Args: x: Predictions (logits). y: Labels. Returns: Binary cross-entropy loss with mean aggregation. """ max_val = jnp.clip(x, 0, None) loss = x - x * y + max_val + jnp.log(jnp.exp(-max_val) + jnp.exp((-x - max_val))) return loss.mean() def compute_loss(params, graph, senders, receivers, labels, net): """Computes loss.""" pred_graph = net.apply(params, graph) preds = decode(pred_graph, senders, receivers) loss = compute_bce_with_logits_loss(preds, labels) return loss, preds def compute_roc_auc_score(preds: jnp.DeviceArray, labels: jnp.DeviceArray) -> jnp.DeviceArray: """Computes roc auc (area under the curve) score for classification.""" s = jax.nn.sigmoid(preds) roc_auc = roc_auc_score(labels, s) return roc_auc ###Output _____no_output_____ ###Markdown Helper function for sampling negative edges during training. ###Code def negative_sampling( graph: jraph.GraphsTuple, num_neg_samples: int, key: jnp.DeviceArray) -> Tuple[jnp.DeviceArray, jnp.DeviceArray]: """Samples negative edges, i.e. 
edges that don't exist in the input graph.""" num_nodes = graph.n_node[0] total_possible_edges = num_nodes**2 # convert 2D edge indices to 1D representation. pos_idx = graph.senders * num_nodes + graph.receivers # Percentage to oversample edges, so most likely will sample enough neg edges. alpha = jnp.abs(1 / (1 - 1.1 * (graph.senders.shape[0] / total_possible_edges))) perm = jax.random.randint( key, shape=(int(alpha * num_neg_samples),), minval=0, maxval=total_possible_edges, dtype=jnp.uint32) # mask where sampled edges are positive edges. mask = jnp.isin(perm, pos_idx) # remove positive edges. perm = perm[~mask][:num_neg_samples] # convert 1d back to 2d edge indices. neg_senders = perm // num_nodes neg_receivers = perm % num_nodes return neg_senders, neg_receivers ###Output _____no_output_____ ###Markdown Let's write the training loop: ###Code def train(dataset, num_epochs: int): """Training loop.""" key = jax.random.PRNGKey(42) # Transform impure `net_fn` to pure functions with hk.transform. net = hk.without_apply_rng(hk.transform(net_fn)) # Get a candidate graph and label to initialize the network. graph = dataset[0]['input_graph'] train_graph, _, val_pos_s, val_pos_r, val_neg_s, val_neg_r, test_pos_s, \ test_pos_r, test_neg_s, test_neg_r = train_val_test_split_edges( graph) # Prepare the validation and test data. val_senders = jnp.concatenate((val_pos_s, val_neg_s)) val_receivers = jnp.concatenate((val_pos_r, val_neg_r)) val_labels = jnp.concatenate( (jnp.ones(len(val_pos_s)), jnp.zeros(len(val_neg_s)))) test_senders = jnp.concatenate((test_pos_s, test_neg_s)) test_receivers = jnp.concatenate((test_pos_r, test_neg_r)) test_labels = jnp.concatenate( (jnp.ones(len(test_pos_s)), jnp.zeros(len(test_neg_s)))) # Initialize the network. params = net.init(key, train_graph) # Initialize the optimizer. opt_init, opt_update = optax.adam(1e-4) opt_state = opt_init(params) compute_loss_fn = functools.partial(compute_loss, net=net) # We jit the computation of our loss, since this is the main computation. # Using jax.jit means that we will use a single accelerator. If you want # to use more than 1 accelerator, use jax.pmap. More information can be # found in the jax documentation. 
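  # `value_and_grad` with `has_aux=True` expects `compute_loss` to return a
  # (loss, aux) pair (here aux is the predictions); gradients are taken of the
  # loss with respect to `params` only.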
compute_loss_fn = jax.jit(jax.value_and_grad(compute_loss_fn, has_aux=True)) for epoch in range(num_epochs): num_neg_samples = train_graph.senders.shape[0] train_neg_senders, train_neg_receivers = negative_sampling( train_graph, num_neg_samples=num_neg_samples, key=key) train_senders = jnp.concatenate((train_graph.senders, train_neg_senders)) train_receivers = jnp.concatenate( (train_graph.receivers, train_neg_receivers)) train_labels = jnp.concatenate( (jnp.ones(len(train_graph.senders)), jnp.zeros(len(train_neg_senders)))) (train_loss, train_preds), grad = compute_loss_fn(params, train_graph, train_senders, train_receivers, train_labels) updates, opt_state = opt_update(grad, opt_state, params) params = optax.apply_updates(params, updates) if epoch % 10 == 0 or epoch == (num_epochs - 1): train_roc_auc = compute_roc_auc_score(train_preds, train_labels) val_loss, val_preds = compute_loss(params, train_graph, val_senders, val_receivers, val_labels, net) val_roc_auc = compute_roc_auc_score(val_preds, val_labels) print( f'epoch: {epoch}, train_loss: {train_loss:.3f}, ' f'train_roc_auc: {train_roc_auc:.3f}, val_loss: {val_loss:.3f}, ' f'val_roc_auc: {val_roc_auc:.3f}' ) test_loss, test_preds = compute_loss(params, train_graph, test_senders, test_receivers, test_labels, net) test_roc_auc = compute_roc_auc_score(test_preds, test_labels) print('Training finished') print( f'epoch: {epoch}, test_loss: {test_loss:.3f}, test_roc_auc: {test_roc_auc:.3f}' ) return params ###Output _____no_output_____ ###Markdown Let's train the model! We expect the model to reach roughly test_roc_auc of 0.84.(Note that ROC-AUC is a scalar between 0 and 1, with 1 being the ROC-AUC of a perfect classifier.) ###Code params = train(cora_ds, num_epochs=200) ###Output _____no_output_____
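###Markdown With a trained encoder we can also score arbitrary candidate edges. The cell below is a minimal sketch of that: it assumes `params` from the call above and reuses `net_fn`, `decode` and the `train_graph` produced in the earlier splitting cell; the candidate node indices are arbitrary illustrations. ###Code
import haiku as hk
import jax
import jax.numpy as jnp

# Minimal sketch: re-build the transformed network and embed the nodes with the trained params.
net = hk.without_apply_rng(hk.transform(net_fn))
pred_graph = net.apply(params, train_graph)

# Arbitrary candidate (sender, receiver) pairs to score.
candidate_senders = jnp.array([0, 10, 42])
candidate_receivers = jnp.array([1, 11, 43])

# Dot-product decoder followed by a sigmoid gives a score in [0, 1] per pair.
logits = decode(pred_graph, candidate_senders, candidate_receivers)
print(jax.nn.sigmoid(logits))
###Output _____no_output_____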
notebooks/example_regiosqm2018.ipynb
###Markdown Example: Use RegioSQM2018Reference- https://github.com/jensengroup/regiosqm- https://pubs.rsc.org/en/content/articlelanding/2018/SC/C7SC04156J ###Code %load_ext autoreload %autoreload 2 %matplotlib inline import logging import sys # Show progress bars on pandas functions from tqdm.auto import tqdm tqdm.pandas() import numpy as np import pandas as pd from IPython.display import SVG from rdkit import Chem from rdkit.Chem import AllChem, PandasTools from rdkit.Chem.Draw import MolsToGridImage, MolToImage, rdMolDraw2D import ppqm # Set logging logging.basicConfig(stream=sys.stdout, level=logging.INFO) logging.getLogger("xtb").setLevel(logging.INFO) show_progress = True # Set DataFrames visuals PandasTools.RenderImagesInAllDataFrames(images=True) pd.set_option('display.float_format','{:.2f}'.format) from IPython.display import HTML ###Output _____no_output_____ ###Markdown Import regiosqm ###Code import regiosqm_lib as regiolib from regiosqm_lib.methods import regiosqm2018 ###Output _____no_output_____ ###Markdown Define molecule ###Code #smiles = "Cc1cc(NCCO)nc(-c2ccc(Br)cc2)n1" # CHEMBL1956589 #smiles = "n1(C)ccnc1" #smiles = "c1cnc(N)c(O[C@@H](c2cc(Cl)ccc2C(F)(F)F)C)c1" smiles = "c1(N(C)C)cccnc1" smiles = "c1(c(ccc(c1)N)F)[C@]1(NC(N(S(=O)(=O)C1)C)NC(=O)OC(C)(C)C)C" # smiles = "n1cccn1c1ncccn1" molobj = Chem.MolFromSmiles(smiles) mol_ = Chem.Mol(molobj, True) atoms = mol_.GetNumAtoms() for idx in range( atoms ): mol_.GetAtomWithIdx( idx ).SetProp( 'molAtomMapNumber', str( mol_.GetAtomWithIdx( idx ).GetIdx() ) ) HTML(PandasTools.PrintAsBase64PNGString(mol_)) ###Output _____no_output_____ ###Markdown Generate and calculate energies of tautomers and protonations ###Code %%time pdf = regiosqm2018.predict_regioselective_dataframe(molobj) ###Output _____no_output_____ ###Markdown Overview of all target sites and energies ###Code HTML(pdf.to_html()) ###Output _____no_output_____ ###Markdown With the all energies, select green and red sites ###Code mol = regiosqm2018.predict_regioselective_sites(molobj, pdf) mol green_indices = mol.GetProp("regiosqm2018_cut1").strip('][').split(', ') red_indices = mol.GetProp("regiosqm2018_cut2").strip('][').split(', ') ###Output _____no_output_____ ###Markdown Show results ###Code # Define pretty colors colors = dict() colors["green"] = (119, 198, 110) colors["green"] = tuple(x/255 for x in colors["green"]) colors["red"] = (201, 43, 38) colors["red"] = tuple(x/255 for x in colors["red"]) # Find reactive centers and convert index type to int. # rdkit doesn't understand np.int green_indices = [int(x) for x in green_indices if x] red_indices = [int(x) for x in red_indices if x not in green_indices and x] # All highlights highlights = green_indices + red_indices # Map highlight to a color colormap = dict() colormap.update({key: [colors["green"]] for key in green_indices}) colormap.update({key: [colors["red"]] for key in red_indices}) # should be working, but does not respect colors # MolToImage( # molobj, # highlightAtoms=highlights, # highlightMap=colormap, # size=(500,500), #) # http://rdkit.blogspot.com/2020/04/new-drawing-options-in-202003-release.html d2d = rdMolDraw2D.MolDraw2DSVG(500, 500) d2d.DrawMoleculeWithHighlights(molobj, "Regioselective site(s)", dict(colormap), {}, {}, {}) d2d.FinishDrawing() SVG(d2d.GetDrawingText()) ###Output _____no_output_____
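###Markdown To keep the annotated drawing around, the SVG text can simply be written to a file. A small sketch, assuming the `d2d` drawer from the cell above is still in scope (the filename is just an example): ###Code
# Persist the highlighted drawing produced above (filename is illustrative).
with open("regioselective_sites.svg", "w") as f:
    f.write(d2d.GetDrawingText())
###Output _____no_output_____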
advanced_functionality/parquet_to_recordio_protobuf/parquet_to_recordio_protobuf.ipynb
###Markdown Converting the Parquet data format to recordIO-wrapped protobuf------ Contents1. [Introduction](Introduction)1. [Optional data ingestion](Optional-data-ingestion) 1. [Download the data](Download-the-data) 1. [Convert into Parquet format](Convert-into-Parquet-format)1. [Data conversion](Data-conversion) 1. [Convert to recordIO protobuf format](Convert-to-recordIO-protobuf-format) 1. [Upload to S3](Upload-to-S3)1. [Training the linear model](Training-the-linear-model) Introduction In this notebook we illustrate how to convert a dataset in the Parquet format into the recordIO-protobuf format that many SageMaker algorithms consume. For the demonstration, first we'll convert the publicly available MNIST dataset into the Parquet format. Subsequently, it is converted into the recordIO-protobuf format and uploaded to S3 for consumption by the linear learner algorithm. ###Code import os import io import re import boto3 import pandas as pd import numpy as np import time from sagemaker import get_execution_role role = get_execution_role() bucket = '<S3 bucket>' prefix = 'sagemaker/parquet' !conda install -y -c conda-forge fastparquet scikit-learn ###Output _____no_output_____ ###Markdown Optional data ingestion Download the data ###Code %%time import pickle, gzip, numpy, urllib.request, json # Load the dataset urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz") with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') from fastparquet import write from fastparquet import ParquetFile def save_as_parquet_file(dataset, filename, label_col): X = dataset[0] y = dataset[1] data = pd.DataFrame(X) data[label_col] = y data.columns = data.columns.astype(str) # Parquet expects the column names to be strings write(filename, data) def read_parquet_file(filename): pf = ParquetFile(filename) return pf.to_pandas() def features_and_target(df, label_col): X = df.loc[:, df.columns != label_col].values y = df[label_col].values return [X, y] ###Output _____no_output_____ ###Markdown Convert into Parquet format ###Code trainFile = 'train.parquet' validFile = 'valid.parquet' testFile = 'test.parquet' label_col = 'target' save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ###Output _____no_output_____ ###Markdown Data conversion Since algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, whose `sagemaker.amazon.common` module is imported as `smac` below.
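###Markdown Before converting, it is worth a quick sanity check that the Parquet round trip preserved the data. A minimal sketch using the helpers defined above (the expected shapes assume the standard 50,000-example MNIST training split): ###Code
# Sanity check (sketch): read one Parquet file back and confirm the shapes.
df_check = read_parquet_file(trainFile)
X_check, y_check = features_and_target(df_check, label_col)
print(X_check.shape, y_check.shape)  # expected: (50000, 784) (50000,)
###Output _____no_output_____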
###Code dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ###Output _____no_output_____ ###Markdown Convert to recordIO protobuf format ###Code import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype('float32') trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype('float32') bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype('float32') validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype('float32') bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ###Output _____no_output_____ ###Markdown Upload to S3 ###Code import boto3 import os key = 'recordio-pb-data' boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(bufTrain) s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key) print('uploaded training data location: {}'.format(s3_train_data)) boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(bufValid) s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key) print('uploaded validation data location: {}'.format(s3_validation_data)) ###Output _____no_output_____ ###Markdown Training the linear model Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets. This example takes four to six minutes to complete. The majority of the time is spent provisioning hardware and loading the algorithm container since the dataset is small. First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in the [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).
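###Markdown Instead of the hard-coded per-region lookup used in the next cell, the SageMaker Python SDK also ships a helper that resolves this image URI for the current region (the exact API depends on your SDK version). A sketch of the equivalent call: ###Code
# Alternative to the hard-coded region-to-image lookup below (sketch).
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'linear-learner')
###Output _____no_output_____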
###Code containers = {'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest', 'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest', 'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest', 'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest'} linear_job = 'linear-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": { "TrainingImage": containers[boto3.Session().region_name], "TrainingInputMode": "File" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" } ], "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss" }, "StoppingCondition": { "MaxRuntimeInSeconds": 60 * 60 } } ###Output _____no_output_____ ###Markdown Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's use a waiter so we can monitor the status of our training. ###Code %%time sm = boto3.Session().client('sagemaker') sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] print(status) sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=linear_job) if status == 'Failed': message = sm.describe_training_job(TrainingJobName=linear_job)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] ###Output _____no_output_____
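###Markdown After the waiter returns, the same `describe_training_job` call also exposes the job's final metrics. A small sketch, assuming the job completed successfully (field names follow the standard DescribeTrainingJob response): ###Code
# Sketch: print the final metrics reported by the completed training job.
desc = sm.describe_training_job(TrainingJobName=linear_job)
for metric in desc.get('FinalMetricDataList', []):
    print(metric['MetricName'], metric['Value'])
###Output _____no_output_____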
###Code import os import io import re import boto3 import pandas as pd import numpy as np import time import sagemaker from sagemaker import get_execution_role role = get_execution_role() sagemaker_session = sagemaker.Session() bucket = sagemaker_session.default_bucket() prefix = 'sagemaker/DEMO-parquet' !conda install -y -c conda-forge fastparquet scikit-learn ###Output _____no_output_____ ###Markdown Optional data ingestion Download the data ###Code %%time import pickle, gzip, numpy, urllib.request, json # Load the dataset urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz") with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') from fastparquet import write from fastparquet import ParquetFile def save_as_parquet_file(dataset, filename, label_col): X = dataset[0] y = dataset[1] data = pd.DataFrame(X) data[label_col] = y data.columns = data.columns.astype(str) #Parquet expexts the column names to be strings write(filename, data) def read_parquet_file(filename): pf = ParquetFile(filename) return pf.to_pandas() def features_and_target(df, label_col): X = df.loc[:, df.columns != label_col].values y = df[label_col].values return [X, y] ###Output _____no_output_____ ###Markdown Convert into Parquet format ###Code trainFile = 'train.parquet' validFile = 'valid.parquet' testFile = 'test.parquet' label_col = 'target' save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ###Output _____no_output_____ ###Markdown Data conversionSince algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below. 
###Code dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ###Output _____no_output_____ ###Markdown Convert to recordIO protobuf format ###Code import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype('float32') trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype('float32') bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype('float32') validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype('float32') bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ###Output _____no_output_____ ###Markdown Upload to S3 ###Code import boto3 import os key = 'recordio-pb-data' boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(bufTrain) s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key) print('uploaded training data location: {}'.format(s3_train_data)) boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(bufValid) s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key) print('uploaded validation data location: {}'.format(s3_validation_data)) ###Output _____no_output_____ ###Markdown Training the linear modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.This example takes four to six minutes to complete. Majority of the time is spent provisioning hardware and loading the algorithm container since the dataset is small.First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation](https://docs-aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). 
###Code from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(boto3.Session().region_name, 'linear-learner') linear_job = 'DEMO-linear-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": { "TrainingImage": container, "TrainingInputMode": "File" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" } ], "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss" }, "StoppingCondition": { "MaxRuntimeInSeconds": 60 * 60 } } ###Output _____no_output_____ ###Markdown Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's setup a while loop so we can monitor the status of our training. ###Code %%time sm = boto3.Session().client('sagemaker') sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] print(status) sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=linear_job) if status == 'Failed': message = sm.describe_training_job(TrainingJobName=linear_job)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] ###Output _____no_output_____ ###Markdown Converting the Parquet data format to recordIO-wrapped protobuf------ Contents1. [Introduction](Introduction)1. [Optional data ingestion](Optional-data-ingestion) 1. [Download the data](Download-the-data) 1. [Convert into Parquet format](Convert-into-Parquet-format)1. [Data conversion](Data-conversion) 1. [Convert to recordIO protobuf format](Convert-to-recordIO-protobuf-format) 1. [Upload to S3](Upload-to-S3)1. [Training the linear model](Training-the-linear-model) IntroductionIn this notebook we illustrate how to convert a Parquet data format into the recordIO-protobuf format that many SageMaker algorithms consume. For the demonstration, first we'll convert the publicly available MNIST dataset into the Parquet format. Subsequently, it is converted into the recordIO-protobuf format and uploaded to S3 for consumption by the linear learner algorithm. 
###Code import os import io import re import boto3 import pandas as pd import numpy as np import time from sagemaker import get_execution_role role = get_execution_role() bucket = '<S3 bucket>' prefix = 'sagemaker/DEMO-parquet' !conda install -y -c conda-forge fastparquet scikit-learn ###Output _____no_output_____ ###Markdown Optional data ingestion Download the data ###Code %%time import pickle, gzip, numpy, urllib.request, json # Load the dataset urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz") with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') from fastparquet import write from fastparquet import ParquetFile def save_as_parquet_file(dataset, filename, label_col): X = dataset[0] y = dataset[1] data = pd.DataFrame(X) data[label_col] = y data.columns = data.columns.astype(str) #Parquet expexts the column names to be strings write(filename, data) def read_parquet_file(filename): pf = ParquetFile(filename) return pf.to_pandas() def features_and_target(df, label_col): X = df.loc[:, df.columns != label_col].values y = df[label_col].values return [X, y] ###Output _____no_output_____ ###Markdown Convert into Parquet format ###Code trainFile = 'train.parquet' validFile = 'valid.parquet' testFile = 'test.parquet' label_col = 'target' save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ###Output _____no_output_____ ###Markdown Data conversionSince algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below. 
###Code dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ###Output _____no_output_____ ###Markdown Convert to recordIO protobuf format ###Code import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype('float32') trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype('float32') bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype('float32') validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype('float32') bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ###Output _____no_output_____ ###Markdown Upload to S3 ###Code import boto3 import os key = 'recordio-pb-data' boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(bufTrain) s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key) print('uploaded training data location: {}'.format(s3_train_data)) boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(bufValid) s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key) print('uploaded validation data location: {}'.format(s3_validation_data)) ###Output _____no_output_____ ###Markdown Training the linear modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.This example takes four to six minutes to complete. Majority of the time is spent provisioning hardware and loading the algorithm container since the dataset is small.First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation](https://docs-aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). 
###Code containers = {'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest', 'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest', 'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest', 'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest'} linear_job = 'DEMO-linear-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": { "TrainingImage": containers[boto3.Session().region_name], "TrainingInputMode": "File" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" } ], "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss" }, "StoppingCondition": { "MaxRuntimeInSeconds": 60 * 60 } } ###Output _____no_output_____ ###Markdown Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's setup a while loop so we can monitor the status of our training. ###Code %%time sm = boto3.Session().client('sagemaker') sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] print(status) sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=linear_job) if status == 'Failed': message = sm.describe_training_job(TrainingJobName=linear_job)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] ###Output _____no_output_____ ###Markdown Converting the Parquet data format to recordIO-wrapped protobuf------ Contents1. [Introduction](Introduction)1. [Optional data ingestion](Optional-data-ingestion) 1. [Download the data](Download-the-data) 1. [Convert into Parquet format](Convert-into-Parquet-format)1. [Data conversion](Data-conversion) 1. [Convert to recordIO protobuf format](Convert-to-recordIO-protobuf-format) 1. [Upload to S3](Upload-to-S3)1. [Training the linear model](Training-the-linear-model) IntroductionIn this notebook we illustrate how to convert a Parquet data format into the recordIO-protobuf format that many SageMaker algorithms consume. For the demonstration, first we'll convert the publicly available MNIST dataset into the Parquet format. Subsequently, it is converted into the recordIO-protobuf format and uploaded to S3 for consumption by the linear learner algorithm. 
###Code import os import io import re import boto3 import pandas as pd import numpy as np import time import sagemaker from sagemaker import get_execution_role role = get_execution_role() sagemaker_session = sagemaker.Session() bucket = sagemaker_session.default_bucket() prefix = "sagemaker/DEMO-parquet" !conda install -y -c conda-forge fastparquet scikit-learn ###Output _____no_output_____ ###Markdown Optional data ingestion Download the data ###Code %%time import pickle, gzip, numpy, urllib.request, json # Load the dataset urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz") with gzip.open("mnist.pkl.gz", "rb") as f: train_set, valid_set, test_set = pickle.load(f, encoding="latin1") from fastparquet import write from fastparquet import ParquetFile def save_as_parquet_file(dataset, filename, label_col): X = dataset[0] y = dataset[1] data = pd.DataFrame(X) data[label_col] = y data.columns = data.columns.astype(str) # Parquet expexts the column names to be strings write(filename, data) def read_parquet_file(filename): pf = ParquetFile(filename) return pf.to_pandas() def features_and_target(df, label_col): X = df.loc[:, df.columns != label_col].values y = df[label_col].values return [X, y] ###Output _____no_output_____ ###Markdown Convert into Parquet format ###Code trainFile = "train.parquet" validFile = "valid.parquet" testFile = "test.parquet" label_col = "target" save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ###Output _____no_output_____ ###Markdown Data conversionSince algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below. 
###Code dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ###Output _____no_output_____ ###Markdown Convert to recordIO protobuf format ###Code import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype("float32") trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype("float32") bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype("float32") validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype("float32") bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ###Output _____no_output_____ ###Markdown Upload to S3 ###Code import boto3 import os key = "recordio-pb-data" boto3.resource("s3").Bucket(bucket).Object(os.path.join(prefix, "train", key)).upload_fileobj( bufTrain ) s3_train_data = "s3://{}/{}/train/{}".format(bucket, prefix, key) print("uploaded training data location: {}".format(s3_train_data)) boto3.resource("s3").Bucket(bucket).Object(os.path.join(prefix, "validation", key)).upload_fileobj( bufValid ) s3_validation_data = "s3://{}/{}/validation/{}".format(bucket, prefix, key) print("uploaded validation data location: {}".format(s3_validation_data)) ###Output _____no_output_____ ###Markdown Training the linear modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.This example takes four to six minutes to complete. Majority of the time is spent provisioning hardware and loading the algorithm container since the dataset is small.First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation](https://docs-aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). 
###Code from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(boto3.Session().region_name, "linear-learner") linear_job = "DEMO-linear-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"}, "ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10}, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated", } }, "CompressionType": "None", "RecordWrapperType": "None", }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated", } }, "CompressionType": "None", "RecordWrapperType": "None", }, ], "OutputDataConfig": {"S3OutputPath": "s3://{}/{}/".format(bucket, prefix)}, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss", }, "StoppingCondition": {"MaxRuntimeInSeconds": 60 * 60}, } ###Output _____no_output_____ ###Markdown Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's setup a while loop so we can monitor the status of our training. ###Code %%time sm = boto3.Session().client("sagemaker") sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)["TrainingJobStatus"] print(status) sm.get_waiter("training_job_completed_or_stopped").wait(TrainingJobName=linear_job) if status == "Failed": message = sm.describe_training_job(TrainingJobName=linear_job)["FailureReason"] print("Training failed with the following error: {}".format(message)) raise Exception("Training job failed") sm.describe_training_job(TrainingJobName=linear_job)["TrainingJobStatus"] ###Output _____no_output_____ ###Markdown Converting the Parquet data format to recordIO-wrapped protobuf------ Contents1. [Introduction](Introduction)1. [Optional data ingestion](Optional-data-ingestion) 1. [Download the data](Download-the-data) 1. [Convert into Parquet format](Convert-into-Parquet-format)1. [Data conversion](Data-conversion) 1. [Convert to recordIO protobuf format](Convert-to-recordIO-protobuf-format) 1. [Upload to S3](Upload-to-S3)1. [Training the linear model](Training-the-linear-model) IntroductionIn this notebook we illustrate how to convert a Parquet data format into the recordIO-protobuf format that many SageMaker algorithms consume. For the demonstration, first we'll convert the publicly available MNIST dataset into the Parquet format. Subsequently, it is converted into the recordIO-protobuf format and uploaded to S3 for consumption by the linear learner algorithm. 
###Code import os import io import re import boto3 import pandas as pd import numpy as np import time from sagemaker import get_execution_role role = get_execution_role() bucket = '<S3 bucket>' prefix = 'sagemaker/DEMO-parquet' !conda install -y -c conda-forge fastparquet scikit-learn ###Output _____no_output_____ ###Markdown Optional data ingestion Download the data ###Code %%time import pickle, gzip, numpy, urllib.request, json # Load the dataset urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz") with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') from fastparquet import write from fastparquet import ParquetFile def save_as_parquet_file(dataset, filename, label_col): X = dataset[0] y = dataset[1] data = pd.DataFrame(X) data[label_col] = y data.columns = data.columns.astype(str) #Parquet expexts the column names to be strings write(filename, data) def read_parquet_file(filename): pf = ParquetFile(filename) return pf.to_pandas() def features_and_target(df, label_col): X = df.loc[:, df.columns != label_col].values y = df[label_col].values return [X, y] ###Output _____no_output_____ ###Markdown Convert into Parquet format ###Code trainFile = 'train.parquet' validFile = 'valid.parquet' testFile = 'test.parquet' label_col = 'target' save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ###Output _____no_output_____ ###Markdown Data conversionSince algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below. 
###Code dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ###Output _____no_output_____ ###Markdown Convert to recordIO protobuf format ###Code import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype('float32') trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype('float32') bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype('float32') validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype('float32') bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ###Output _____no_output_____ ###Markdown Upload to S3 ###Code import boto3 import os key = 'recordio-pb-data' boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(bufTrain) s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key) print('uploaded training data location: {}'.format(s3_train_data)) boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(bufValid) s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key) print('uploaded validation data location: {}'.format(s3_validation_data)) ###Output _____no_output_____ ###Markdown Training the linear modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.This example takes four to six minutes to complete. Majority of the time is spent provisioning hardware and loading the algorithm container since the dataset is small.First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation](https://docs-aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). 
###Code containers = {'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest', 'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest', 'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest', 'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest', 'ap-northeast-1': '351501993468.dkr.ecr.ap-northeast-1.amazonaws.com/linear-learner:latest'} linear_job = 'DEMO-linear-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": { "TrainingImage": containers[boto3.Session().region_name], "TrainingInputMode": "File" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" } ], "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss" }, "StoppingCondition": { "MaxRuntimeInSeconds": 60 * 60 } } ###Output _____no_output_____ ###Markdown Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's setup a while loop so we can monitor the status of our training. ###Code %%time sm = boto3.Session().client('sagemaker') sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] print(status) sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=linear_job) if status == 'Failed': message = sm.describe_training_job(TrainingJobName=linear_job)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] ###Output _____no_output_____ ###Markdown Converting the Parquet data format to recordIO-wrapped protobuf------ Contents1. [Introduction](Introduction)1. [Optional data ingestion](Optional-data-ingestion) 1. [Download the data](Download-the-data) 1. [Convert into Parquet format](Convert-into-Parquet-format)1. [Data conversion](Data-conversion) 1. [Convert to recordIO protobuf format](Convert-to-recordIO-protobuf-format) 1. [Upload to S3](Upload-to-S3)1. [Training the linear model](Training-the-linear-model) IntroductionIn this notebook we illustrate how to convert a Parquet data format into the recordIO-protobuf format that many SageMaker algorithms consume. For the demonstration, first we'll convert the publicly available MNIST dataset into the Parquet format. Subsequently, it is converted into the recordIO-protobuf format and uploaded to S3 for consumption by the linear learner algorithm. 
###Code import os import io import re import boto3 import pandas as pd import numpy as np import time from sagemaker import get_execution_role role = get_execution_role() bucket = '<S3 bucket>' prefix = 'sagemaker/DEMO-parquet' !conda install -y -c conda-forge fastparquet scikit-learn ###Output _____no_output_____ ###Markdown Optional data ingestion Download the data ###Code %%time import pickle, gzip, numpy, urllib.request, json # Load the dataset urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz") with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') from fastparquet import write from fastparquet import ParquetFile def save_as_parquet_file(dataset, filename, label_col): X = dataset[0] y = dataset[1] data = pd.DataFrame(X) data[label_col] = y data.columns = data.columns.astype(str) #Parquet expexts the column names to be strings write(filename, data) def read_parquet_file(filename): pf = ParquetFile(filename) return pf.to_pandas() def features_and_target(df, label_col): X = df.loc[:, df.columns != label_col].values y = df[label_col].values return [X, y] ###Output _____no_output_____ ###Markdown Convert into Parquet format ###Code trainFile = 'train.parquet' validFile = 'valid.parquet' testFile = 'test.parquet' label_col = 'target' save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ###Output _____no_output_____ ###Markdown Data conversionSince algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below. 
###Code dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ###Output _____no_output_____ ###Markdown Convert to recordIO protobuf format ###Code import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype('float32') trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype('float32') bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype('float32') validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype('float32') bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ###Output _____no_output_____ ###Markdown Upload to S3 ###Code import boto3 import os key = 'recordio-pb-data' boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(bufTrain) s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key) print('uploaded training data location: {}'.format(s3_train_data)) boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(bufValid) s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key) print('uploaded validation data location: {}'.format(s3_validation_data)) ###Output _____no_output_____ ###Markdown Training the linear modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.This example takes four to six minutes to complete. Majority of the time is spent provisioning hardware and loading the algorithm container since the dataset is small.First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation](https://docs-aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). 
###Code from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(boto3.Session().region_name, 'linear-learner') linear_job = 'DEMO-linear-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": { "TrainingImage": container, "TrainingInputMode": "File" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" } ], "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss" }, "StoppingCondition": { "MaxRuntimeInSeconds": 60 * 60 } } ###Output _____no_output_____ ###Markdown Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's setup a while loop so we can monitor the status of our training. ###Code %%time sm = boto3.Session().client('sagemaker') sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] print(status) sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=linear_job) if status == 'Failed': message = sm.describe_training_job(TrainingJobName=linear_job)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] ###Output _____no_output_____ ###Markdown Converting the Parquet data format to recordIO-wrapped protobuf------ Contents1. [Introduction](Introduction)1. [Optional data ingestion](Optional-data-ingestion) 1. [Download the data](Download-the-data) 1. [Convert into Parquet format](Convert-into-Parquet-format)1. [Data conversion](Data-conversion) 1. [Convert to recordIO protobuf format](Convert-to-recordIO-protobuf-format) 1. [Upload to S3](Upload-to-S3)1. [Training the linear model](Training-the-linear-model) IntroductionIn this notebook we illustrate how to convert a Parquet data format into the recordIO-protobuf format that many SageMaker algorithms consume. For the demonstration, first we'll convert the publicly available MNIST dataset into the Parquet format. Subsequently, it is converted into the recordIO-protobuf format and uploaded to S3 for consumption by the linear learner algorithm. 
###Code import os import io import re import boto3 import pandas as pd import numpy as np import time from sagemaker import get_execution_role role = get_execution_role() bucket = '<S3 bucket>' prefix = 'sagemaker/DEMO-parquet' !conda install -y -c conda-forge fastparquet scikit-learn ###Output _____no_output_____ ###Markdown Optional data ingestion Download the data ###Code %%time import pickle, gzip, numpy, urllib.request, json # Load the dataset urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz") with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = pickle.load(f, encoding='latin1') from fastparquet import write from fastparquet import ParquetFile def save_as_parquet_file(dataset, filename, label_col): X = dataset[0] y = dataset[1] data = pd.DataFrame(X) data[label_col] = y data.columns = data.columns.astype(str) #Parquet expexts the column names to be strings write(filename, data) def read_parquet_file(filename): pf = ParquetFile(filename) return pf.to_pandas() def features_and_target(df, label_col): X = df.loc[:, df.columns != label_col].values y = df[label_col].values return [X, y] ###Output _____no_output_____ ###Markdown Convert into Parquet format ###Code trainFile = 'train.parquet' validFile = 'valid.parquet' testFile = 'test.parquet' label_col = 'target' save_as_parquet_file(train_set, trainFile, label_col) save_as_parquet_file(valid_set, validFile, label_col) save_as_parquet_file(test_set, testFile, label_col) ###Output _____no_output_____ ###Markdown Data conversionSince algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. E.g., the Amazon SageMaker implementation of Linear Learner takes recordIO-wrapped protobuf. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as `sagemaker` below. 
###Code dfTrain = read_parquet_file(trainFile) dfValid = read_parquet_file(validFile) dfTest = read_parquet_file(testFile) train_X, train_y = features_and_target(dfTrain, label_col) valid_X, valid_y = features_and_target(dfValid, label_col) test_X, test_y = features_and_target(dfTest, label_col) ###Output _____no_output_____ ###Markdown Convert to recordIO protobuf format ###Code import io import numpy as np import sagemaker.amazon.common as smac trainVectors = np.array([t.tolist() for t in train_X]).astype('float32') trainLabels = np.where(np.array([t.tolist() for t in train_y]) == 0, 1, 0).astype('float32') bufTrain = io.BytesIO() smac.write_numpy_to_dense_tensor(bufTrain, trainVectors, trainLabels) bufTrain.seek(0) validVectors = np.array([t.tolist() for t in valid_X]).astype('float32') validLabels = np.where(np.array([t.tolist() for t in valid_y]) == 0, 1, 0).astype('float32') bufValid = io.BytesIO() smac.write_numpy_to_dense_tensor(bufValid, validVectors, validLabels) bufValid.seek(0) ###Output _____no_output_____ ###Markdown Upload to S3 ###Code import boto3 import os key = 'recordio-pb-data' boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'train', key)).upload_fileobj(bufTrain) s3_train_data = 's3://{}/{}/train/{}'.format(bucket, prefix, key) print('uploaded training data location: {}'.format(s3_train_data)) boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, 'validation', key)).upload_fileobj(bufValid) s3_validation_data = 's3://{}/{}/validation/{}'.format(bucket, prefix, key) print('uploaded validation data location: {}'.format(s3_validation_data)) ###Output _____no_output_____ ###Markdown Training the linear modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the Linear Learner training algorithm, although we have tested it on multi-terabyte datasets.This example takes four to six minutes to complete. Majority of the time is spent provisioning hardware and loading the algorithm container since the dataset is small.First, let's specify our containers. Since we want this notebook to run in all 4 of Amazon SageMaker's regions, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation](https://docs-aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). 
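Before pinning down the container and launching the job, it can be worth reading a record or two back out of a protobuf buffer to confirm that the conversion round-trips. A rough sketch, assuming the installed SDK still ships `read_records` in `sagemaker.amazon.common` and that `write_numpy_to_dense_tensor` stores features and labels under the `'values'` key (both are assumptions worth checking against your SDK version); it reuses `trainVectors` and `trainLabels` from the cells above:
###Code
# Sketch only: sanity-check the recordIO-protobuf conversion by reading records back.
# Assumes smac.read_records exists in the installed SDK and that dense tensors are
# stored under the 'values' key; reuses trainVectors / trainLabels defined earlier.
import io
import sagemaker.amazon.common as smac

check_buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(check_buf, trainVectors[:5], trainLabels[:5])
check_buf.seek(0)

records = list(smac.read_records(check_buf))
first = records[0]
print('records read:', len(records))
print('features per record:', len(first.features['values'].float32_tensor.values))
print('first label:', first.label['values'].float32_tensor.values)
###Output
_____no_output_____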
###Code containers = {'us-west-2': '174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:latest', 'us-east-1': '382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:latest', 'us-east-2': '404615174143.dkr.ecr.us-east-2.amazonaws.com/linear-learner:latest', 'eu-west-1': '438346466558.dkr.ecr.eu-west-1.amazonaws.com/linear-learner:latest', 'ap-northeast-1': '351501993468.dkr.ecr.ap-northeast-1.amazonaws.com/linear-learner:latest', 'ap-northeast-2': '835164637446.dkr.ecr.ap-northeast-2.amazonaws.com/linear-learner:latest'} linear_job = 'DEMO-linear-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) print("Job name is:", linear_job) linear_training_params = { "RoleArn": role, "TrainingJobName": linear_job, "AlgorithmSpecification": { "TrainingImage": containers[boto3.Session().region_name], "TrainingInputMode": "File" }, "ResourceConfig": { "InstanceCount": 1, "InstanceType": "ml.c4.2xlarge", "VolumeSizeInGB": 10 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, "CompressionType": "None", "RecordWrapperType": "None" } ], "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "HyperParameters": { "feature_dim": "784", "mini_batch_size": "200", "predictor_type": "binary_classifier", "epochs": "10", "num_models": "32", "loss": "absolute_loss" }, "StoppingCondition": { "MaxRuntimeInSeconds": 60 * 60 } } ###Output _____no_output_____ ###Markdown Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed (AWS handles spinning up and spinning down hardware), we don't have to wait for our job to finish to continue, but for this case, let's setup a while loop so we can monitor the status of our training. ###Code %%time sm = boto3.Session().client('sagemaker') sm.create_training_job(**linear_training_params) status = sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] print(status) sm.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=linear_job) if status == 'Failed': message = sm.describe_training_job(TrainingJobName=linear_job)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') sm.describe_training_job(TrainingJobName=linear_job)['TrainingJobStatus'] ###Output _____no_output_____
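###Markdown
Once the job reports `Completed`, the same `DescribeTrainingJob` response also tells us where the trained model artifact was written (under the `S3OutputPath` configured above). A small sketch, reusing the `sm` client and `linear_job` name from the previous cells:
###Code
# Sketch only: locate the trained model artifact after the job finishes.
# Reuses the boto3 SageMaker client (sm) and training job name (linear_job) from above.
job_description = sm.describe_training_job(TrainingJobName=linear_job)

if job_description['TrainingJobStatus'] == 'Completed':
    print('model artifact:', job_description['ModelArtifacts']['S3ModelArtifacts'])
else:
    print('job status:', job_description['TrainingJobStatus'])
###Output
_____no_output_____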
lessons/python/ep2-loops.ipynb
###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) print(char) ###Output o x y g e n n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. 
Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' for banana in word: print(banana) ###Output o x y g e n ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print(vowel) print(length) print('There are', length, 'vowels') ###Output a 1 e 2 i 3 o 4 u 5 There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print(letter) for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output z a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. 
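Putting the three forms side by side, the loops below print `0 1 2`, then `2 3 4`, then `3 5 7 9`:
```
for number in range(3):
    print(number)
for number in range(2, 5):
    print(number)
for number in range(3, 10, 2):
    print(number)
```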
###Code for num in range(3, 10, 2): print(num) ###Output 3 5 7 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for number in range(1, 4): print(number) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code result = 1 for number in range(0, 3): result = result * 5 print(result) print(result) ###Output 5 25 125 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code newstring = '' oldstring = 'Newton' for char in oldstring: newstring = char + newstring print(newstring) print(newstring) ###Output N eN weN tweN otweN notweN notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 coefficients = [2, 4, 3] for idx, coef in enumerate(coefficients): y = y + coef * x**idx print(y) print(y) ###Output 2 22 97 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. 
To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) print(word) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. 
We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(3,10,2): print(num) range? ###Output _____no_output_____ ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code 6 ###Output _____no_output_____ ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 
5 * 5 * 5). ###Code result=1 for i in range (0,3,1): result=result*5 print(result) ###Output 5 25 125 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code letter = 'Newton' result='' count=0 for letter in 'Newton': result=letter+result print(letter) print(result) ###Output N N e eN w weN t tweN o otweN n notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. 
It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' #word size changes print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for i in word: print(i) print(i) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for i in 'aeiou': length = length + 1 print('There are', length, 'vowels') length = 0 for x in 'aeious': length = length + 1 print(length) print(x) print('There are ', length, 'characters') ###Output 1 a 2 e 3 i 4 o 5 u 6 s There are 6 characters ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. 
The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for i in range(3): print(i) for i in range(2, 5): print(i) for i in range(3, 10, 2): #increment print(i) for i in range(8, 2, -1): #decrement print(i) #does not include the value asked for at the end ###Output 8 7 6 5 4 3 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(3): print(i+1) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code w = 'oxygen' for i in w: print(i) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code print(5**3) value = 5 x = 5 for i in range(2): x = x * value print(x) result = 1 for i in range(3): result *= 5 print(result) ###Output 5 25 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print('a' + 'b') word = 'Newton' for i in range(5, -1, -1): print(word[i], end = '') word = "testing" print(word[::-1]) word = "Facebook" for char in range(len(word)-1, -1, -1): print(word[char], end = "") ###Output koobecaF ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i = ', i, 'j = ', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coeff = [2, 4, 3] y = coeff[0]* x**0 + coeff[1]*x**1 + coeff[2]*x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'leading artist' for char in word: print(char) ###Output l e a d i n g a r t i s t ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for i in word: print(i) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' ###Output _____no_output_____ ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'oxygen': length = length + 1 print('There are', length, 'vowels') ###Output There are 6 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print(letter) for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output z a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print('aeiou', 'abcd', sep = ' # ', end = ' ') print("hello") ###Output aeiou # abcd hello ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for i in range(200): print(i+1, end=' ') ###Output 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1,4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) # 6 times = number of characters in the string ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code a = 1 for i in range(3): a = a * 5 print(a) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code string = 'Newton' for char in range(len(string)-1,-1,-1): print(string[char], end = '') ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 cc = [2, 4, 3] for i, c in enumerate(cc): y += c * x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` Word is a variable - and in this case it is a string because its individual characters.Sqaure brackets [] accesses elements. ###Code word = 'lead' print(word) print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` Shift L (capital L) you get the line numbers so you know where the errors are. ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' print (word) for char in word: print(char) ###Output l e a d ###Markdown Using char is a shortcut for adding all the individual charactersThis is a for loopand it has indent when you are in a for loop This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. vowel is a variable name in the for loop Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` here we are assigning the value of 0 to the word length - so when you add one it becomes 1, then 2 etc and loops through remember, what ever is on the right of the equal is done first. ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') l = 0 for i in'aeiou': l+=1 print(l) ###Output 5 ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. 
After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print('Before the loop: letter=', letter) for letter in 'abc': print('Inside the loop: letter=', letter) print('After the loop: letter=', letter) ###Output Before the loop: letter= z Inside the loop: letter= a Inside the loop: letter= b Inside the loop: letter= c After the loop: letter= c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. the word 'number' could be anything, we could use 'i' or whatever we want but we have to assign a variable. ###Code for number in range(10): print(number) for number in range(1,10): print(number) for number in range(1,10,2): print(number) ###Output 1 3 5 7 9 ###Markdown adding the other paramater, prints every second number Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for number in range (1,4): print (number) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' counter = 0 for char in word: # print(char) counter = counter + 1 print (counter) ###Output 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code print(5 ** 3) x = 5 for i in range (1,3): x = x*5 print(i,x) print (x) ###Output 1 25 2 125 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. 
this concatonates the letter to make them into one variable ###Code print('a' + 'b') string1 = 'Newton' print(string1) string2 = '' print(string2) for char in string1: string2 = char + string2 print (string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` you can have more than 1 variable in a for loop - so in this case we have variable i and variable j. ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) list1 = 2.22, 4.44, 3.33 print (list1) print (list1[1]) ###Output 4.44 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): print ('i', i, 'c', c) y = y + c* x**i print(y) ###Output i 0 c 2 i 1 c 4 i 2 c 3 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. 
It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code # initialise a variable length = 0 # iterate through the collection for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') # check what happens when you iterate length = 0 for vowel in 'aeiou': length = length + 1 print(length) print(vowel) ###Output There are 5 vowels 1 a 2 e 3 i 4 o 5 u ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. 
After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(0, 100, 5): print(number, end = ' ') ###Output 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for number in range(1, 4): print(number) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code n = 5 multi = n for iteration in range(1,3): multi = multi*n print(multi) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word = 'Newton' # the last letter word[len(word)-1] # solution 1 string = 'Newton' for char in range(len(string)-1,-1,-1): print(string[char], end = '') # solution 2 reverse = '' count = 0 for i in word: reverse = reverse + word[len(word)-count-1] count = count + 1 print(reverse) ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
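As a small illustration of "many other things", `len` also counts the items in a list, a type that appears later in this lesson; a quick sketch (the numbers are just illustrative):

```
print(len([10, 20, 30]))  # prints 3 - len counts the items in the list
```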
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 cc = [2, 4, 3] for i, c in enumerate(cc): y += c*x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word) print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output lead l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = "oxygen" for i in word: print(i) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code l = 0 for i in "aeiou": l+=1 print(l) ###Output 1 2 3 4 5 ###Markdown Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print(letter) for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output z a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(10): print(number) print(range(10)) print(type(range(10))) for i in range(3,10,2): print(i) ###Output 3 5 7 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1,4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' counter = 0 for char in word: counter = counter + 1 print(char) print(counter) ###Output o 1 x 2 y 3 g 4 e 5 n 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code x = 5 for i in range(1,3): x = x*5 print(i,x) print(x) ###Output 1 25 25 2 125 125 3 625 625 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print("a"+ "b") string1 = "Newton" print(string1) string2 ="" print(string2) for char in string1: string2 = char + string2 print(string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for index, coeff in enumerate(coefficient): print("index", index, "coeff", coeff) # y = y + coeff*x**power y = y + coeff *(numpy.power(x,index)) print(y) ###Output index 0 coeff 2 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'me' for i in word: print(i) ###Output m e ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1,4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = ' oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code product = 5 for i in range(2): product = product * 5 print(product) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code str = 'Haresh' l = len(str) text = '' for i in range(l-1,-1,-1): text = text + str[i] print(text) ###Output hseraH ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... 
###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word) print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output lead l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. 
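The loop version also fixes the maintenance problem raised earlier: to decorate every character with an asterisk (or anything else) we only change the one line inside the loop. A minimal sketch of that idea (the word chosen is just an example):

```
word = 'lead'
for char in word:
    print('*' + char + '*')  # a single edited line decorates every character
```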
###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for xx__yy in 'aeiou': length = length + 1 print(length) print(xx__yy) print('There are', length, 'xx__yy') ###Output 1 a 2 e 3 i 4 o 5 u There are 5 xx__yy ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' #doesn't matter what letter is assigned to the variable before the for loop. Each letter is put into the varianle 'letter'. for letter in 'abc': print(letter) print('after the loop, letter is', letter) #after the loop is finished, the last element you assign to the variable is what is left. ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. 
###Code print(len('aeiou')) print(len([1,2,3,4,5,6,7,8])) ###Output 5 8 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range (2,8): print(num, end = '') #when you indent code it means that it belongs to the code above #print? print('blah', 'blah', 'blah', sep='#', end ='-?-') print('this is a second line...') ###Output 234567blah#blah#blah-?-this is a second line... ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range (1,4): print(num) # to add in a step... for num in range (1,10,2): print(num) # range can also start from a higher number at decrement for num in range (8,3,-1): print(num) ###Output 1 2 3 1 3 5 7 9 8 7 6 5 4 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) print('there are', word, 'oxygen') print(len('oxygen')) ###Output o x y g e n there are oxygen oxygen 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code word = 'wow' value = 1 for num in word: value = value*5 print(value) ###Output 5 25 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word = 'Newton' print(word[::-1]) for char in word: print(char) word = 'Newton' for char in range(len(word)-1,-1,-1): print(word[char], end='') ###Output notweN N e w t o n notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. 
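Before running it, a quick sanity check of the arithmetic helps: with x = 5 the three terms are 2, 20 and 75, which sum to 97. A small sketch that prints each term separately:

```
x = 5
print(2 * x**0, 4 * x**1, 3 * x**2)  # 2 20 75, and 2 + 20 + 75 = 97
```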
###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 cc = [2,4,3] coefficient = [2, 4, 3] for i, c in enumerate(cc): y += c * x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` Note: even strings start form 0 ###Code word='lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) print(word) ###Output lead ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` NOTE: loops work with indetations, not with parenthesis ###Code word='lead' print(word) for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. 
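The phrase "once for each thing in a sequence" is not limited to strings: the same loop shape works on a list of numbers, for example. A minimal sketch (the list and variable name are just illustrative):

```
for value in [10, 20, 30]:
    print(value)  # prints each item of the list on its own line
```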
The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code l=0 for i in 'aeiou': l=l+1 print('There are', l, 'vowels') ###Output There are 5 vowels ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` Note: the variable letter will change in the loop, and after the loop will remain changed: it overrite the existing variable out side the loop ###Code letter='z' print('before the loop:', letter) for letter in 'abc': print('inside loop:',letter) print('after the loop:', letter) ###Output before the loop: z inside loop: a inside loop: b inside loop: c after the loop: c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print(len(word)) ###Output 4 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. 
`range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(10): print(number) print(range(10)) print(type(range(10))) for number in range(3,10,3): print(number) ###Output 3 6 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for n in range(1,4): print(n) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code l=0 for char in 'oxygen': l=l+1 print(l) ###Output 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). Note: range(a,b) create an iteraation stuff with number of elements equal to (b-a) ###Code c=5 for i in range(1,3): c=c*5 print(c) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code string1='Newton' print(string1) string2='' for char in string1: string2=char+string2 print(string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) list1=[2.22, 4.44, 3.33] print(list1) ###Output [2.22, 4.44, 3.33] ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... 
###Code y = 0 x=5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): y =y+ c*x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print (word) word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' print(word) print(word[0]) for char in word: print (char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. 
###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' for banana in word: print(banana) ###Output o x y g e n ###Markdown Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code l=0 for i in 'aeiou': l+=1 print(l) ###Output 5 ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print ('before the loop: letter', letter) for letter in 'abc': print('inside the loop: letter', letter) print('after the loop: letter', letter) ###Output before the loop: letter z inside the loop: letter a inside the loop: letter b inside the loop: letter c after the loop: letter c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print (len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. 
`range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(10): print(number) for i in range(3,10): print(i) for i in range(3,10,3): print(i) ###Output 3 6 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1,4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code print(5 ** 3) x = 5 for i in range(1,4): x = x*5 print(i,x) print(x) ###Output 1 25 2 125 3 625 625 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print('a'+'b') string1='Newton' print(string1) string2='' print(string2) for char in string1: string2 =char +string2 print(string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... 
###Code y = 0 x = 5 coefficient = [2, 4, 3,3,4,5,6] for i, c in enumerate(coefficients): print('i',i,'c',c) y = y + c*x**i print(y) import numpy print(numpy.power(2,3)) ###Output 8 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. 
`endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). 
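One possible solution is sketched below (starting `result` at 1 and looping with `range(3)` is just one of several reasonable choices):

```python
result = 1
for i in range(3):          # repeat the multiplication three times
    result = result * 5
print(result)               # prints 125, the same as 5 ** 3
```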
Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'dogs' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output d o g s ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output _____no_output_____ ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output _____no_output_____ ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' for characters in word: print(characters) print (word) ###Output _____no_output_____ ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print(length) print('There are', length, 'vowels') ###Output _____no_output_____ ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output _____no_output_____ ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output _____no_output_____ ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(4): print(i) ###Output _____no_output_____ ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) # loop 5 times ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code number = 5 # herer i am saying set the original value ---5 for i in range (2): # herer i am set how many times I want to loop -- i.e. loop 0 = 5 ;loop 1 = (5) * 5; loop 2 = (5*5)*5 number = number * 5 # let the number time 5 print(number) ###Output 125 ###Markdown ![image.png](attachment:e72d64c0-1ac0-4696-8891-df340ae09d09.png) Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word = 'Newton' for char in word: print() #not sure what to do???????????????????????????? x = 'Newton'[::-1] print(x) # this is just print reverse without using any loop newstring = "" oldstring = "Newton" for char in oldstring: # newstring = char + newstring print(newstring) ###Output N eN weN tweN otweN notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33, 5.55]): # if in a list [ ] then the first value is always an index, the print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 i = 3 j = 5.55 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] # notice in a list, index 0 here is 2. y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for idx, coef in enumerate (coefficient): #enumerate is a type of loop y = y + coef * x**idx print(y) print(y) ###Output 2 22 97 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. 
If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels','aeiou') ###Output There are 5 vowels aeiou ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code #letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(3): print(num) ###Output 0 1 2 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in "range(200)": print(num, end=' ') print? print('aeiou','abcd',sep='#', end='?') print('this is a second line....') range? for num in range(2,6,3): print(num) for num in range(8,3,-1): print(num) ###Output 8 7 6 5 4 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) #6 times it is executed ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code a=1 for i in range(3): a= a*5 print(a) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code string='Newton' for char in range(1, len(string)+1): print(string[-char]) print(string[::-1]) word="test" print(word[::-1]) ###Output tset ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x=5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): y +=c* x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygenation' for char in word: print(char) ###Output o x y g e n a t i o n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 #why 0? because you want to select the first element in the collection. for vowel in 'aeiou': #vowel is the variable name which you have assigned as a name to the variable. The aeiou is the collection. length = length + 1 print(length) print(vowel) print('There are', length, 'vowels') ###Output 1 a 2 e 3 i 4 o 5 u There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abcz7': print(letter) print('after the loop, letter is', letter) ###Output a b c z 7 after the loop, letter is 7 ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) #print the size of the collection print(len('collection of charachters')) ###Output 25 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code # this is doing something a certain number of times, eg: read the first five numbers #this will give you 3 numbers since you have specified 3. If you specified 200, would print 200 numbers. for num in range(3): print(num) print('aeiou', 'abcd') #create a new parameter as a separator print? #create a new parameter as a separator print('aeiou', 'abcd', sep = '#') range? # step is the 3rd number # range can also start from higher number to lower number for num in range (2,6,3): print(num) for num in range (8,3,-1): print(num) ###Output 8 7 6 5 4 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code # 6 word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code x = 1 # because we want to loop through only once for num in range(3): x = x * 5 print(x) # if you indented the print, it would print the multiplication results (3 lines), as the print becomes part of function # alternative answer? result = 1 for i in range (0,3): result = result * 5 print(result) ###Output 5 25 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. 
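One possible approach is sketched below, building the reversed string one character at a time (the names `word` and `reversed_word` are only illustrative):

```python
word = 'Newton'
reversed_word = ''
for char in word:
    # put each new character in front of the characters collected so far
    reversed_word = char + reversed_word
print(reversed_word)        # notweN
```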
###Code word = 'Newton' for char in range(len(string) -1, -1, -1): print(string[char], end ='') # alternative answer word = 'Newton' print(word[::-1]) ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 cc = [2, 4, 3] # cc is coefficient for i, c in enumerate(cc): y += c * x**i #total = total + value print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) print(word[0],word[1],word[2],word[3]) ###Output l e a d l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. 
While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' print(len(word)) for banana in word: print(banana, end = '') print(banana, end = '') print(banana) ###Output 6 ooxxyyggeennn ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') # Check length print(len('aeiou')) print(vowel) print(len(vowel)) ###Output There are 5 vowels 5 u 1 ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. 
After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code x = range(6) for n in x: print(n) ###Output 0 1 2 3 4 5 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code x = range(1, 4) for n in x: print(n) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' length = 0 for char in word: length = length + 1 print(char) print(length) ###Output o x y g e n 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code length = 0 result = 5 for count in range(1, 3): result = result * 5 print(result) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word = 'fred' word[::-1] ###Output _____no_output_____ ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) ###Output l e a ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) #char = character? 
#indentation means python will execute this inside the loop ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code l = 0 for vowel in 'aeiou': length = l + 1 print('There are', length, 'vowels') #same as below, just below is shorter code l = 0 for vowel in 'aeiou': l+= l print('There are', length, 'vowels') ###Output There are 1 vowels ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print('Before the loop: letter=', letter) for letter in 'abc': print('Inside the loop: letter=', letter) print('After the loop: letter=', letter) #to make code into a comment and back press ctl+/ ###Output Before the loop: letter= z Inside the loop: letter= a Inside the loop: letter= b Inside the loop: letter= c After the loop: letter= c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print(len(word)) print(word) ###Output 6 oxygen ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(10): print(number) print(range(10)) print(type(range(10))) #if code isn't working and kernal is active, check if cell is Raw or Code and change to Code for i in range(3,10): print(i) for i in range(3,10,2): print(i) ###Output 3 5 7 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1,4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' counter = 0 for char in word: counter = counter + 1 print(counter) ###Output 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code print(5**3) x = 5 for i in range(1,4): x = x*5 print(i,x) print(x) x=5 for i in range(1,3): #0,2 or 1,3 works the same? x=x*5 print(i,x) print(x) ###Output 1 25 2 125 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print('a'+'b') string1 = 'Newton' print(string1) string2 = '' print(string2) for char in string1: string2 = char+string2 ## string2+char will give 'Newton' - it has to be char+string2 print(string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =', i, 'j =', j) list1=[2.22, 4.44, 3.33] print(list1) ###Output [2.22, 4.44, 3.33] ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): print('i', i, 'c', c) y = y + c*x**i ### same as above cell, just shorter print(y) # Code from above cell but with more easy to understand loop variable names import numpy y = 0 x = 5 coefficient = [2, 4, 3] for index, coeff in enumerate(coefficient): print('index', index, 'coeff', coeff) y = y + coeff*(numpy.power(x,index)) ### can use 'power' in place of index too, but confusing with the numpy.power function print(y) ###Output index 0 coeff 2 index 1 coeff 4 index 2 coeff 3 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. 
While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word='lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word='oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter='z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(3, 20, 4): print(num) ###Output 3 7 11 15 19 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? 6 Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code answer=1 for num in range(5,26,20): answer=answer * num print(answer) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code A = 'Newton' for word in range(5,-1,-1): print(A[word], end="") ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. 
###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code enumerate? y = 0 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): y += coefficients[i] * x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. 
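This also addresses the maintainability problem mentioned above: to decorate every printed character with an asterisk, only the single line inside the loop body has to change. A minimal sketch (the `'*'` decoration is just an example):

```
word = 'oxygen'
for char in word:
    # one edit changes the decoration for every character, however long the word
    print(char + '*')
```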
The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' for banana in word: print(banana) ###Output o x y g e n ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length= length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. 
For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(2,4): print(num) for num1 in range(2,12,2): print(num1, end=' ') print('ddd','sxca',sep='-33-') ###Output 2 3 2 4 6 8 10 ddd-33-sxca ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code for num, char in enumerate('oxygen'): print( num, char) print(len('oxygen')) ###Output 0 o 1 x 2 y 3 g 4 e 5 n 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code exp=1 for var in range(0,3): exp=5*exp print(exp) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word='Newton' for string in range(len(word)-1,-1,-1): print(word[string],end='') print(word[::-1]) ###Output notweNnotweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code y=0 x=5 cc=[2,4,3] for i,c in enumerate(cc): y+= c* x**i print(y) #+= is same as y=y+value #-= is same as y=y-value ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. 
How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word) word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'eternity' for char in word: print(char) ###Output e t e r n i t y ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. 
`endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' for banana in word: print(banana) ###Output o x y g e n ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'p' for letter in 'abcde': print(letter) print('after the loop, letter is', letter) ###Output a b c d e after the loop, letter is e ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('velociraptor and I')) ###Output 18 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(10, 2, -1): print(num) range? 
###Output _____no_output_____ ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1, 4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown 6 times Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code print(5 ** 3) result = 1 for i in range(0, 3): result = result * 5 print(result) ###Output 5 25 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word1 = 'Newton' word2 = '' count = 0 for i in word1: word2 = word2 + word1[len(word1)-count-] count = count+1 print(word2) ###Output _____no_output_____ ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code y = 0 x = 5 cc = [2, 4, 3] for i, c in enumerate(cc): y += c * x**i print(y) ###Output 97 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. 
For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) print(char) ###Output l l e e a a d d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. 
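As a quick check that the name really doesn't matter to Python, the `banana` version produces exactly the same output as the `char` version:

```
word = 'oxygen'
# works, but 'char' would tell the reader what each item actually is
for banana in word:
    print(banana)
```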
Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(2, 5): print(num) range? ###Output _____no_output_____ ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1, 4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? 6 Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code num = 5 for time in range(2): num = num * 5 print(num) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code string = 'Newton' for i in range(len(name)-1, -1, -1): print(string[i]) ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 cc = [2, 4, 3] for i, c in enumerate(cc): y = y + c * x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(1,3): print(num) ###Output 1 2 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code n = 5 multi = n for power in range(1,3): multi = multi*n print(multi) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word = 'Newton' len('Newton') for ###Output _____no_output_____ ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... 
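For reference, one possible completed version is sketched here before the blank template cell below (a sketch only; it assumes `x` has already been given a value such as `5`):

```
y = 0
x = 5
coefficient = [2, 4, 3]
for i, c in enumerate(coefficient):
    # add c * x**i for each (index, coefficient) pair
    y = y + c * x**i
print(y)   # 97 for x = 5 and these coefficients
```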
###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. 
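The same pattern is not limited to strings: a `for` loop steps through any sequence, for example a list of numbers (an illustrative sketch):

```
for value in [2.22, 4.44, 3.33]:
    print(value)
```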
###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word='oxygen' for banana in word: print(banana) ###Output o x y g e n ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print(vowel) print(length) print('There are', length, 'vowels') ###Output a 1 e 2 i 3 o 4 u 5 There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. 
`range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(3, 10, 2): print(num) ###Output 3 5 7 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown 6 Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code print (5**3) print(25*5) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print('a' + 'b') word = 'Newton' [::-1] print(word) newstring = '' oldstring = 'Newton' for char in oldstring: newstring = char + newstring print(newstring) print(newstring) ###Output N eN weN tweN otweN notweN notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i=',i, 'j=',j) ###Output i= 0 j= 2.22 i= 1 j= 4.44 i= 2 j= 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output _____no_output_____ ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... 
###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) y = 0 x = 5 coefficients = [2, 4, 3] for idx, coef in enumerate(coefficients): y = y + coef * x**idx print(y) print(y) ###Output 2 22 97 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char, '---', sep = '.', end='') print('\n') help(print) ###Output o.---x.---y.---g.---e.---n.--- Help on built-in function print in module builtins: print(...) print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False) Prints the values to a stream, or to sys.stdout by default. Optional keyword arguments: file: a file-like object (stream); defaults to the current sys.stdout. sep: string inserted between values, default a space. end: string appended after the last value, default a newline. 
flush: whether to forcibly flush the stream. ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. 
For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(3): print(num+1) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code num = 1 for count in range(0,3): num = num * 5 print(num) ###Output 5 25 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word1 = 'abcdef' word2 = '' count = 0 for i in word1: word2 = word2 + word1[len(word1)-count-1] count = count+1 print(word2) ###Output fedcba ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 coefficients = [2, 4, 3] for i, c in enumerate(coefficients): y += c * x**i print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. 
To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' ###Output _____no_output_____ ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'Crichton' for char in word: print(char) ###Output C r i c h t o n ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code print? 
###Output _____no_output_____ ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print(vowel, ' = ', length, ' , ', end='') print('There are', length, 'vowels') ###Output a = 1 , e = 2 , i = 3 , o = 4 , u = 5 , There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code print(5*5*5) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. 
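One possible loop-based sketch (the names `old_string` and `new_string` are just illustrative): build the reversed string by putting each new character in front of everything collected so far.

```
old_string = 'Newton'
new_string = ''
for char in old_string:
    # put each character in front of the result built so far
    new_string = char + new_string
print(new_string)
```

Slicing (`old_string[::-1]`) gives the same result without an explicit loop, which is the shortcut tried in the next cell.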
###Code word = 'Newton' print(word[::-1]) ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given x and any coefficients. Here's a starting template ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`), and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example, if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters.
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print (char) ###Output _____no_output_____ ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. ###Code word = 'oxygen' for banana in word: print(banana) ###Output o x y g e n ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) print(len('abcdefg')) ###Output 7 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for i in range(3, 10, 2): print(i) ###Output 3 5 7 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1, 4): print(i) for i in range(1, 4): print(i, end = ' ') print() ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code print('6 times') ###Output 6 times ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code newstring = '' oldstring = 'Newton' for char in oldstring: newstring = char + newstring print(newstring) print(newstring) ###Output N eN weN tweN otweN notweN notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 coefficients = [2, 4, 3] for idx, coef in enumerate(coefficients): y = y + coef * x**idx print(y) print(y) ###Output 2 22 97 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'fun' print(word[0]) print(word[2]) ###Output f n ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' print(word) ###Output lead ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code for letter in word: print(letter) ###Output l e a d ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code l = 0 for i in 'aeiou': i=+1 print(i) ###Output 1 ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print('before the loop: letter=', letter) for letter in 'abc': print(letter) print('After the loop: letter', letter) ###Output before the loop: letter= z a b c After the loop: letter c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(10): print(number) ###Output 0 1 2 3 4 5 6 7 8 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for number in range(1,4): print(number) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code x = 5 for i in range(1,3): x = x*5 print(i,x) print(x) ###Output 1 25 2 125 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code string1= 'Newton' print(string1) string2= "" print(string2) for char in string1: string2 = char + string2 print(string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. 
Here's a starting templates ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): print ('i', i, 'c', c) print(y) ###Output i 0 c 2 i 1 c 4 i 2 c 3 0 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. 
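A `for` loop is not limited to strings: any sequence can be looped over in exactly the same way. A minimal sketch, with a purely illustrative list of numbers:

```
# the loop body runs once for each item in the list
for number in [2, 4, 3]:
    print(number * 10)
```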
###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(word) ###Output o x y g e n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print(length) print(vowel) print('There are', length, 'vowels') ###Output 1 a 2 e 3 i 4 o 5 u There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abcd': print(letter) print('after the loop, letter is', letter) ###Output a b c d after the loop, letter is d ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. 
For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code range(3) for num in range(20): print(num, end=' ') print('aeiou', 'abcde', 'xyz', sep='-?') print('this is a second line...') ###Output aeiou-?abcde-?xyz this is a second line... ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code print(5 ** 3) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print('a' + 'b') ###Output ab ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): y += c * x**i print(y) total = 5 total += 6 total -= (6 + 7 * 3) # same as totsl = total - 6 print(total) ###Output -16 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. 
We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = "lead" print(word) print(word[0]) print(word[1]) ###Output l e ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = "lead" for letter in word: print(letter) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = "oxygen" for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. 
We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' print('before the loop: letter=', letter) for letter in 'abc': print('Inside the loop: letter=', letter) print('After the loop: letters=', letter) ###Output before the loop: letter= z Inside the loop: letter= a Inside the loop: letter= b Inside the loop: letter= c After the loop: letters= c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print(len(word)) ###Output 6 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(10): print(number) print(range(10)) print(type(range(10))) for i in range(3, 10, 3): print(i) ###Output 3 6 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1, 4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? 
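Since the body runs once per character, it is executed `len(word)` times, which is 6 for `'oxygen'`. A quick check that avoids counting by hand:

```
word = 'oxygen'
print(len(word))   # 6 characters, so the loop body runs 6 times
```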
###Code word = 'oxygen' counter = 0 for char in word: # print(char) counter = counter + 1 print(counter) ###Output 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code x = 5 for i in range(1,4): x = x*5 print(i, x) print(x) ###Output 1 25 2 125 3 625 625 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print('a' + 'b') string1 = 'Newton' print(string1) string2 = '' print(string2) for char in string1: string2 = char + string2 print(string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) list1 = [2.22, 4.44, 3.33] print(list1) ###Output [2.22, 4.44, 3.33] ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) import numpy print(numpy.power(2,3)) ###Output 8 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for index, coeff in enumerate(coefficient): print('power', power, 'coeff', coeff) y = y + coeff*(numpy.power(x,index)) print(y) ###Output power 2 coeff 2 power 2 coeff 4 power 2 coeff 3 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. 
Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) word = 'oxygen' for char in word: print(char) for char in word: print(char) print(word) ###Output o x y g e n o x y g e n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. 
###Code word = 'oxygen' for banana in word: print(banana) ###Output o x y g e n ###Markdown Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') length = 0 for consonents in 'ssdgsgj' : length = length + 1 print(consonents) print(length) print('There are', length, 'consonents') ###Output s 1 s 2 d 3 g 4 s 5 g 6 j 7 There are 7 consonents ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(3, 10, 2): print(num) ###Output 3 5 7 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1, 4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' print(len(word)) ###Output 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). 
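One loop-based sketch that respects the 'no exponentiation' constraint: start from 1 and multiply by 5 once per pass through the loop.

```
result = 1
for count in range(3):    # three passes, one per factor of 5
    result = result * 5
print(result)             # prints 125, the same as 5 ** 3
```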
###Code number1 = 5 number2 = 3 print('Value is', number1 ** number2) ###Output Value is 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code txt = "Newton"[::-1] print(txt) newstring = '' oldstring = 'Newton' for char in oldstring: newstring = char + newstring print (newstring) print(newstring) ###Output N eN weN tweN otweN notweN notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) y = 0 x = 5 coefficients = [2, 4, 3] for idx, coef in enumerate(coefficients): y= y + coef * x**idx print(y) print(y) ###Output 2 22 97 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____ ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. 
One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word) ###Output lead ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for i in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. 
The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1, 4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code print(5**3) ###Output 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print('a' + 'b') string1 = 'Newton' print(string1) string2 = "" print(string2) for char in string1: string2 = char + string2 print(string2) ###Output Newton notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) list1 = [2.22, 4.44, 3.33] print(list1) ###Output [2.22, 4.44, 3.33] ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for i, c in enumerate(coefficient): print('i', i, 'c', c) y = y + c*x**i print(y) ###Output i 0 c 2 i 1 c 4 i 2 c 3 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output l e a d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('aeiou')) ###Output 5 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(8, 3, -1): print(num) ###Output 8 7 6 5 4 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1, 4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code 6 ###Output _____no_output_____ ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code value = 1 for x in range(3): value = value*5 print(value) print('after the loop, value is', value) ###Output 5 25 125 after the loop, value is 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code value = 'Newton' result = '' for x in value: result = x+ result print(result) ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 cc = [2, 4, 3] for i, c in enumerate(cc): y += c * x**i # magic required here print(y) ###Output 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'lead' print(word[0]) print(word[1]) print(word[2]) print(word[3]) print(word) ###Output l e a d lead ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'oxygenation' for char in word: print(char) ###Output o x y g e n a t i o n ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for xx__yy in 'this is a string that cannot be changed...': length = length + 1 print(length) print(xx__yy) print('There are', length, 'characters') ###Output 1 t 2 h 3 i 4 s 5 6 i 7 s 8 9 a 10 11 s 12 t 13 r 14 i 15 n 16 g 17 18 t 19 h 20 a 21 t 22 23 c 24 a 25 n 26 n 27 o 28 t 29 30 b 31 e 32 33 c 34 h 35 a 36 n 37 g 38 e 39 d 40 . 41 . 42 . There are 42 characters ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abcz7': print(letter) print('after the loop, letter is', letter) ###Output a b c z 7 after the loop, letter is 7 ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len('this is also a collection of characters...!@#$$%%^&&*(()())')) ###Output 59 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range(8, 3, -1): print(num) range? print('aeiou', 'abcde', 'xyz', sep='#', end='-?-') print('this is a second line...') ###Output aeiou#abcde#xyz-?-this is a second line... ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for num in range(1,4): print(num) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code result = 1 for i in range(0, 5, 1): result = result * 5 print(result) ###Output 5 25 125 625 3125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code word1 = 'Facebooking' word2 = '' count=0 for i in word1: word2 = word2 + word1[len(word1)-count-1] count = count+1 print(word2) word = "testing 1 2 3" print(word[::-1]) string = 'Newton' for char in range(len(string)-1,-1,-1): print(string[char],end='') ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) ###Output i = 0 j = 2.22 i = 1 j = 4.44 i = 2 j = 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 x = 5 cc = [2, 4, 3] for i, c in enumerate(cc): y += c * x**i print(y) total = 5 total += 6 total -= (6 + 7 * 3) # same as total = total - 6 print(total) ###Output -16 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word='lead' print(word[0]) print(word[3]) ###Output l d ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output t i n ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = 'lead' for char in word: print(char) ###Output l e a d ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code word='oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for xx in 'kelly': length = length + 1 print(length) print(xx) print('there are', length, 'xx') ###Output 1 k 2 e 3 l 4 l 5 y there are 5 xx ###Markdown It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter ='z' for letter in 'abc': print(letter) print('after the loop,letter is',letter) ###Output a b c after the loop,letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. ###Code print(len([1,2,3,4])) ###Output 4 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for num in range (88,8,-9): print(num) range? ###Output _____no_output_____ ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range (1,4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' for char in word: print(char) ###Output o x y g e n ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). ###Code a=1 for x in range(3): a=a*5 print(a) ###Output 5 25 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code char = 'Newton' for string in range(len(char)-1,-1,-1): print(char[string],end='') ###Output notweN ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i,j in enumerate ([2.22,4.44,3.33]): print('i=',i,'j',j) ###Output i= 0 j 2.22 i= 1 j 4.44 i= 2 j 3.33 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) total=8 total *= 4 #same as total = total * 4 print(total) x=0 while x<5: x=x+1 print(x) ###Output 1 2 3 4 5 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we’ll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = "lead" print(word) print(word[0]) print(word[1]) ###Output e ###Markdown This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterix or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word’s characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don’t exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) ###Output t i n ###Markdown Here’s a better approach:```word = 'lead'for char in word: print(char)``` ###Code word = "lead" print(word) ###Output lead ###Markdown This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. ###Code for char in word: print(char) ###Output l e a d ###Markdown The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output o o x x y y g g e e n n oxygen ###Markdown What’s in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here’s another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` ###Code length = 0 for vowel in 'aeiou': length = length + 1 print('There are', length, 'vowels') ###Output There are 5 vowels ###Markdown It’s worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that’s being used to record progress in a loop. 
It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` ###Code letter = 'z' for letter in 'abc': print(letter) print('after the loop, letter is', letter) ###Output a b c after the loop, letter is c ###Markdown Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven’t met yet, so we should always use it when we can. ###Code print(len('word')) ###Output 4 ###Markdown From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. ###Code for number in range(10): print(number) print(range(10)) print(type(range(10))) for i in range(3, 10, 2): print(i) ###Output 3 5 7 9 ###Markdown Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` ###Code for i in range(1, 4): print(i) ###Output 1 2 3 ###Markdown Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? ###Code word = 'oxygen' counter = 0 for char in word: #print(char) counter = counter +1 print(counter) ###Output 6 ###Markdown Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - ie 5 * 5 * 5). ###Code print(5**3) x = 5 for i in range(0,2): x = x*5 print(i, x) print(x) ###Output 0 25 1 125 125 ###Markdown Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. ###Code print("a" + "b") string1 = "Newton" print(string1) String2 = "" print(string2) for char in string1: string2 = char + string2 print(string2) ###Output Newton ###Markdown Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. 
Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` ###Code for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j) list1 = [2.22, 4.44, 3.33] print(list1) print(list1[1]) ###Output 4.44 ###Markdown Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. ###Code x = 5 coefficients = [2, 4, 3] y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2 print(y) ###Output 97 ###Markdown Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficents. Here's a starting templates ... ###Code y = 0 x = 5 coefficient = [2, 4, 3] for power, coeff in enumerate(coefficient): print("powe", power, "coeff",coeff) y = y + coeff*x**power print(y) ###Output powe 0 coeff 2 powe 1 coeff 4 powe 2 coeff 3 97 ###Markdown Programming with Python Episode 2 - Repeating Actions with LoopsTeaching: 30 min, Exercises: 30 min Objectives- Explain what a for loop does.- Correctly write for loops to repeat simple calculations.- Trace changes to a loop variable as the loop runs.- Trace changes to other variables as they are updated by a for loop. How can I do the same operations on many different values?In the last episode, we wrote some code that plots some values of interest from our first inflammation dataset (`inflammation-01.csv`, and revealed some suspicious features in it. We have a dozen data sets right now, though, and more on the way. We want to create plots for all of our data sets with a single statement. To do that, we'll have to teach the computer how to repeat things.An example simple task that we might want to repeat is printing each character in a word on a line of its own. For example the if the variable `word` contains the string `lead`, we would like to print:```lead```In Python, a string is just an ordered collection of characters. In our example `l` `e` `a` `d`. Every character has a unique number associated with it – its index. This means that we can access characters in a string using their indices. For example, we can get the first character of the word `lead`, by using `word[0]`. One way to print each character is to use four print statements:```word = 'lead'print(word[0])print(word[1])print(word[2])print(word[3])``` This is a bad approach for three reasons:- Not scalable. Imagine you need to print characters of a string that is hundreds of letters long. It might be easier just to type them in manually.- Difficult to maintain. If we want to decorate each printed character with an asterisk or any other character, we would have to change four lines of code. While this might not be a problem for short strings, it would definitely be a problem for longer ones.- Fragile. If we use it with a word that has more characters than what we initially envisioned, it will only display part of the word's characters. 
A shorter string, on the other hand, will cause an error because it will be trying to display part of the string that don't exist.```word = 'tin'print(word[0])print(word[1])print(word[2])print(word[3])``` ###Code word = 'tin' print(word[0]) print(word[1]) print(word[2]) print(word[3]) ###Output _____no_output_____ ###Markdown Here's a better approach:```word = 'lead'for char in word: print(char)``` This is shorter — certainly shorter than something that prints every character in a hundred-letter string — and more robust as well:```word = 'oxygen'for char in word: print(char)```The improved version uses a `for` loop to repeat an operation — in this case, printing letters — once for each thing in a sequence. The general form of a `for` loop is:```for variable in collection: do things using variable, such as print```In our example, `char` is the variable, `word` is the collection being looped through and `print(char)` is the thing we want to do.We can call the loop variable anything we like, but there must be a colon `:` at the end of the line starting the loop, and we must *indent* anything we want to run inside the loop. Unlike many other languages, there is no syntax to signify the end of the loop body (e.g. `endfor`) - a loop ends when you stop indenting.```word = 'oxygen'for char in word: print(char) print(char)print(word)``` ###Code word = 'oxygen' for char in word: print(char) print(char) print(word) ###Output _____no_output_____ ###Markdown What's in a name?In the example above, the loop variable was given the name `char` as a mnemonic; it is short for *character*. We can choose any name we want for variables. We might just as easily have chosen the name `banana` for the loop variable, as long as we use the same name when we use the variable inside the loop:word = 'oxygen'for banana in word: print(banana)It is a good idea to choose variable names that are meaningful, otherwise it would be more difficult to understand what the loop is doing. Here's another loop that repeatedly updates a variable:```length = 0for vowel in 'aeiou': length = length + 1print('There are', length, 'vowels')``` It's worth tracing the execution of this little program step by step. Since there are five characters in `'a'` `'e'` `'i'` `'o'` `'u'`, the statement on line 3 will be executed five times. At the start of the loop, `length` is `0` (zero) (the value assigned to it on line 1) and `vowel` is `'a'`. The statement *inside* the loop adds `1` to the old value of `length`, producing `1`, and assigns `length` the new value. The next time around, `vowel` is `'e'` and `length` is 1, so `length` is updated to be 2. After three more updates, 'length' is '5'; since there is nothing left in 'aeiou' for Python to process, the loop finishes and the `print` statement on line 4 tells us our final answer. Note that a loop variable is just a variable that's being used to record progress in a loop. It still exists after the loop is over, and we can re-use variables previously defined as loop variables as well:```letter = 'z'for letter in 'abc': print(letter)print('after the loop, letter is', letter)``` Note also that finding the length of a string is such a common operation that Python actually has a built-in function to do it called `len`:```print(len('aeiou'))````len` is much faster than any function we could write ourselves, and much easier to read than a two-line loop; it will also give us the length of many other things that we haven't met yet, so we should always use it when we can. 
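As a quick illustration of that last point (our own snippet, not part of the original lesson), `len` works on other kinds of sequences as well, not just strings:

```
print(len('oxygen'))     # 6 characters in the string
print(len([2, 4, 3]))    # 3 items in the list
```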
From 1 to nPython has a built-in function called `range` that generates a sequence of numbers. `range` can accept 1, 2, or 3 parameters:- if one parameter is given, `range` generates a sequence of that length, starting at zero and incrementing by 1. For example, `range(3)` produces the numbers 0, 1, 2.- if two parameters are given, `range` starts at the first and ends just before the second, incrementing by one. For example, `range(2, 5)` produces 2, 3, 4.- if 'range' is given 3 parameters, it starts at the first one, ends just before the second one, and increments by the third one. For example, 'range(3, 10, 2)' produces 3, 5, 7, 9. Exercises Using ranges ...Using `range`, write a loop that uses range to print the first 3 natural numbers:```123``` Understanding loopsGiven the following loop:```word = 'oxygen'for char in word: print(char)```How many times is the body of the loop executed? Computing Powers With LoopsExponentiation is built into Python:```print(5 ** 3)```produces 125.Write a loop that calculates the same result as `5 ** 3` using multiplication (and without exponentiation - i.e. 5 * 5 * 5). Reverse a StringKnowing that two strings can be concatenated using the `+` operator:```print('a' + 'b')```write a loop that takes a string and produces a new string with the characters in reverse order, so 'Newton' becomes 'notweN'. Computing the Value of a PolynomialThe built-in function `enumerate` takes a sequence (e.g. a list) and generates a new sequence of the same length. Each element of the new sequence is a pair composed of the index and the value from the original sequence:```for i, j in enumerate([2.22, 4.44, 3.33]): print('i =',i, 'j =', j)``` Suppose you have encoded a polynomial as a list of coefficients in the following way: The first element is the constant term (x^0), the second element is the coefficient of the linear term (x^1), the third is the coefficient of the quadratic term (x^2), etc.So to evaluate:```y = 2 + 4x + 3x^2```where x = 5, we could use the following code:```x = 5coefficients = [2, 4, 3]y = coefficients[0] * x**0 + coefficients[1] * x**1 + coefficients[2] * x**2print(y)```Try it - you should get the answer `97`. Now, write a loop using `enumerate` which computes the value y of any polynomial, given and x any coefficients. Here's a starting templates ... ###Code y = 0 coefficient = [2, 4, 3] for i, c in enumerate(cc): y = # magic required here print(y) ###Output _____no_output_____
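One possible completion of the template above, shown here for reference and matching the worked example that expects `97`. Note that the template defines `coefficient` but loops over `cc`, so the two names need to be made consistent:

```
y = 0
x = 5
coefficient = [2, 4, 3]
for i, c in enumerate(coefficient):
    y = y + c * x**i     # add coefficient * x raised to the matching power
print(y)                 # 97 for x = 5 and coefficients [2, 4, 3]
```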
notebooks/02.02-Pixel-count-inspector.ipynb
###Markdown Inspect the incremental pixel counts ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format = 'retina' from astropy.io import fits import astropy.visualization from fast_histogram import histogram1d from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) hdu_counts = fits.open('../data/FFI_counts/C101_FFI_mask.fits', memmap=True) plt.figure(figsize=(14,14)) plt.imshow(hdu_counts['MOD.OUT 10.2'].data, interpolation='none', ); ###Output _____no_output_____
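###Markdown `histogram1d` is imported above but not used yet; a minimal sketch of how it could summarize the pixel-count distribution for the same channel (the bin count and range below are illustrative assumptions, not values from the original analysis): ###Code data = hdu_counts['MOD.OUT 10.2'].data
vmax = float(data.max())

# 256 equal-width bins from 0 up to the largest count (assumed binning choice)
counts = histogram1d(data.ravel(), bins=256, range=(0, vmax + 1))

edges = np.linspace(0, vmax + 1, 257)
centers = 0.5 * (edges[:-1] + edges[1:])
plt.plot(centers, counts)
plt.xlabel('pixel count')
plt.ylabel('number of pixels');
 ###Output _____no_output_____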
2_1_1.ipynb
###Markdown 2.1.구글 Colab 기반 이그나이트는 작성 시점을 기준으로 다음의 백엔드들을 지원한다.- backends from native torch distributed configuration: “nccl”, “gloo”, “mpi”- XLA on TPUs via pytorch/xla- using Horovod framework as a backend각 의미를 이해하기 위해 가능한 단순한 내용의 코드를 준비하고, 이를 구글 Colab 기반으로 동작시켜 본다. 2.1.1. CPU단 분산처리 구글 Colab은, 별도로 런타임 유형을 지정하지 않는 경우 GPU나 TPU가 없는 VM(Virtual Machine)이 기본 할당된다. 이는 Colab 페이지의 상단메뉴에서 ‘런타임’ > ‘런타임 유형 변경’ 선택 시 나오는 대화상자가, 아래 그림에서처럼 하드웨어 가속기가 None으로 표시되는 상태인 것으로 확인 가능하다. 이 상태에서 기본 CPU만을 이용하여 분산처리를 진행해보자. 2.1.1.1. 패키지 설치 먼저 현 시점 기준 Colab에서 제공하는 VM은, 이그나이트가 사전 설치되어 있지 않은 상태이다. 따라서 다음과 같이 이그나이트의 최신 version을 설치한다. 참고로 pip 명령어는 package installer for python)의 약자이며, 아래 명령문 실행 시 이그나이트의 pre-release를 PyPI(python package index)로부터 설치하게 된다. ###Code !pip install --pre pytorch-ignite ###Output Requirement already satisfied: pytorch-ignite in /usr/local/lib/python3.7/dist-packages (0.5.0.dev20210910) Requirement already satisfied: torch<2,>=1.3 in /usr/local/lib/python3.7/dist-packages (from pytorch-ignite) (1.9.0+cu102) Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch<2,>=1.3->pytorch-ignite) (3.7.4.3) ###Markdown 결과에 따르면, 본 가이드라인 작성 시점의 최신 버전인 xxxx년 x월 xx일자 (예: 2021년 9월 10일자) 이그나이트 x.x.x (예: 0.5.0)가 설치 되었음을 알 수 있다.. 2.1.1.2. 노드 1개, 노드 당 프로세스 수 1개 실행 설치가 완료되었다면, 아래의 코드를 실행한다. 코드에서 수행하는 작업은 아래와 같다.~~코드라인 22번부터 31번까지인 idist.Parallel 컨텍스트 매니징 방식만 주의해서 보자. 그 이외는 중요하지 않다~~ (라인 1)에서는 이그나이트의 distributed 패키지를 idist라는 이름으로 불러들이고, (라인 26-28)에서는 백엔드로 gloo를 할당하는 등의 설정 작업을 지정한 후, (라인 22)에서의 training 함수를, (라인 30-31)에서 컨택스트 매니징이 가능한 idist.Parallel을 이용하여 run 시킨다. (라인 22)의 training 함수는 몇 가지 정보를 출력한 후, 무의미한 작업을 반복하도록 작성되었다. 그리고 (라인 27)에서는 분산처리 설정에 해당하는 dist_configs 딕셔너리에 nproc_per_node 키에 해당하는 값으로 2를 설정하였다. 이는 2개의 자식 프로세스를 생성(spawn)하여 분산 처리를 진행하라는 명령으로 생각하면 된다. ###Code import ignite.distributed as idist from functools import wraps import time import random def fn_timer(function): @wraps(function) def function_timer(*args, **kwargs): t0 = time.time() result = function(*args, **kwargs) t1 = time.time() print (idist.get_rank(), " : Total time running %s: %s seconds" % (function.__name__, str(t1-t0))) return result return function_timer @fn_timer def random_sort(n): return sorted([random.random() for i in range(n)]) def training(local_rank, config, **kwargs): print(idist.get_rank(), ': run with config:', config, '- backend=', idist.backend()) random_sort(2500000) backend = 'gloo' # or "xla-tpu" or None dist_configs = {'nproc_per_node': 1, "start_method": "fork"} # or dist_configs = {...} config = {'c': 12345} with idist.Parallel(backend=backend, **dist_configs) as parallel: parallel.run(training, config, a=1, b=2) ###Output 2021-09-11 01:40:26,746 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'gloo' 2021-09-11 01:40:26,748 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes: nproc_per_node: 1 nnodes: 1 node_rank: 0 start_method: fork 2021-09-11 01:40:26,750 ignite.distributed.launcher.Parallel INFO: Spawn function '<function training at 0x7f39881228c0>' in 1 processes ###Markdown 결과의 첫번 째 출력 항목은 아래와 같으며, distributed launcher가 gloo 백엔드로 초기화 되었음을 표시한다. ~~누군가의 실행에 따라 날짜 등은 계속 변한다. 그리고 이그나이트 버전 변경에 따라 표시되는 내용도 변할 수 있다.~~ >*2021-06-25 07:48:33,646 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'gloo'* 상기 ‘’’1.4 분산 딥러닝 기본 지식’’’ 항목에서 언급된 바와 같이 분산처리를 위해서는 컴퓨팅 코어 간 통신이 필요하며, 본 예제에서는 컴퓨팅 코어가 CPU인 경우이므로 gloo나 mpi이 백엔드로 이용된 것이다. 그리고 만일 이 컴퓨팅 코어가 GPU인 경우 nccl 백엔드 이용이 가능하다. 
이와 관련해서는 DISTRIBUTED COMMUNICATION PACKAGE - TORCH.DISTRIBUTED 페이지의 [rule of thumb](https://pytorch.org/docs/stable/distributed.html) 항목을 참조한다. ~~분산처리의 역사만큼이나 다양한 분산처리 방식이 존재한다. TCP 등을 이용해 직접 프로세스간 통신을 처리하는 방법도 있지만, 하이레벨 관점에서 CPU는 gloo, GPU는 nccl, TPU는 xla 백엔드를 사용해야 한다고 생각하는 것이 정신건강에 좋다.~~ 두번 째 출력 항목은 node 1개(nnodes: 1)에서, 노드 당 1개의 분산처리 작업 (nproc_per_node: 1)이 진행될 것임을 알려준다. >*2021-06-25 07:48:33,649 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes:* >>*nproc_per_node: 1* *nnodes: 1* *node_rank: 0* *start_method: fork* 세번 째 출력 항목은 각 프로세스 내에서 print문에 의한 출력과 측정된 소요 시간을 보여주고 있다. rank가 0번 하나로 표기되었고, 이는 프로세스 총 합이 1개임을 보여 준다.>*2021-06-25 07:48:33,651 ignite.distributed.launcher.Parallel INFO: Spawn function '' in 1 processes*>>*0 : run with config: {'c': 12345} - backend= gloo* *0 : Total time running random_sort: 2.119020938873291 seconds* 그리고 마지막 출력 항목은 분산처리가 모두 끝났음을 알리고 있다.>*2021-06-25 07:48:36,009 ignite.distributed.launcher.Parallel INFO: End of run* 여기서 프로세스 시작을 알리는 시간(세번 째 출력 항목)이 07시 48분 33초이고, 분산처리가 모두 끝났음을 알리는 시간(마지막 출력 항목)은 07시 48분 36초임을 확인해 보자. 각 2.1초 정도의 시간이 소요되는 작업(training 함수)을 총 1번 진행하였으며, 전체 처리 시간은 약 2.5초 소요되었음을 알려주고있다. ~~처리 시간은 Colab에서 할당해주는 VM 종류에 따라 변할 수 있다~~ 2.1.1.3. 노드 1개, 노드 당 프로세스 수 2개 실행 만일 노드 당 분산처리 작업(nproc_per_node)을 2개로 바꾸면 어떻게 될까? 이를 위해 아래의 코드를 추가하고 실행한다. 코드에서는 노드 당 2개의 분산처리 작업을 수행할 수 있도록 nproc_per_node 항목에 2를 할당하였다. ###Code dist_configs['nproc_per_node'] = 2 with idist.Parallel(backend=backend, **dist_configs) as parallel: parallel.run(training, config, a=1, b=2) ###Output 2021-09-11 01:40:28,412 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'gloo' 2021-09-11 01:40:28,414 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes: nproc_per_node: 2 nnodes: 1 node_rank: 0 start_method: fork 2021-09-11 01:40:28,417 ignite.distributed.launcher.Parallel INFO: Spawn function '<function training at 0x7f39881228c0>' in 2 processes ###Markdown 아래 출력 항목은 node 1개(nnodes: 1)에서, 노드 당 2개의 분산처리 작업 (nproc_per_node: 2)이 진행될 것임을 알려준다. >*2021-06-25 05:48:33,629 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes:* >>*nproc_per_node: 2* *nnodes: 1* *node_rank: 0* *start_method: fork* 그리고 아래 출력 항목은 각 프로세스 내에서 print문에 의한 출력과 측정된 소요 시간을 보여주고 있다. rank가 각 0번과 1번으로 표기되었고, 프로세스 총 합은 2개임을 보여 준다.>*2021-06-25 05:48:33,651 이그나이트.distributed.launcher.Parallel INFO: Spawn function '' in 2 processes* *1 : run with config: {'c': 12345} - backend= gloo* *0 : run with config: {'c': 12345} - backend= gloo* *0 : Total time running random_sort: 2.119020938873291 seconds* *1 : Total time running random_sort: 2.177091121673584 seconds* 그리고 마지막 출력 항목은 분산처리가 모두 끝났음을 알리고 있다.>*2021-06-25 05:48:36,619 이그나이트.distributed.launcher.Parallel INFO: End of run* 여기서 프로세스 시작을 알리는 시간(세번 째 출력 항목)이 05시 48분 33초이고, 분산처리가 모두 끝났음을 알리는 시간(마지막 출력 항목)은 05시 48분 36초임을 확인해 보자. 각 2.1초 정도의 시간이 소요되는 작업(training 함수)을 총 2번 진행하였으나, 분산처리의 도움으로 전체 처리 시간은 3초에 그친다. 2.1.1.4. 노드 1개, 노드 당 프로세스 수 8개 실행 만일 노드 당 분산처리 작업(nproc_per_node)를 8개로 바꾸면 어떻게 될까? 
###Code dist_configs['nproc_per_node'] = 8 with idist.Parallel(backend=backend, **dist_configs) as parallel: parallel.run(training, config, a=1, b=2) ###Output 2021-09-11 01:40:31,223 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'gloo' 2021-09-11 01:40:31,226 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes: nproc_per_node: 8 nnodes: 1 node_rank: 0 start_method: fork 2021-09-11 01:40:31,228 ignite.distributed.launcher.Parallel INFO: Spawn function '<function training at 0x7f39881228c0>' in 8 processes ###Markdown 다음과 같이 8개의 child process가 생성되어 작업이 진행되었음을 알 수 있다.>*2021-06-25 07:44:08,255 이그나이트.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'gloo' 2021-06-25 07:44:08,256 이그나이트.distributed.launcher.Parallel INFO: - Parameters to spawn processes:* >>*nproc_per_node: 8* *nnodes: 1* *node_rank: 0* *start_method: fork* >*2021-06-25 07:44:08,259 이그나이트.distributed.launcher.Parallel INFO: Spawn function '' in 8 processes* *0 : run with config: {'c': 12345} - backend= gloo* *4 : run with config: {'c': 12345} - backend= gloo* *5 : run with config: {'c': 12345} - backend= gloo* *3 : run with config: {'c': 12345} - backend= gloo* *2 : run with config: {'c': 12345} - backend= gloo* *1 : run with config: {'c': 12345} - backend= gloo* *6 : run with config: {'c': 12345} - backend= gloo* *7 : run with config: {'c': 12345} - backend= gloo* *7 : Total time running random_sort: 8.505528688430786 seconds* *0 : Total time running random_sort: 8.733642339706421 seconds* *6 : Total time running random_sort: 8.731879472732544 seconds* *4 : Total time running random_sort: 8.774088859558105 seconds* *3 : Total time running random_sort: 8.826892614364624 seconds* *2 : Total time running random_sort: 8.832333087921143 seconds* *1 : Total time running random_sort: 8.853899955749512 seconds* *5 : Total time running random_sort: 8.880859375 seconds* >*2021-06-25 07:44:19,250 이그나이트.distributed.launcher.Parallel INFO: End of run* 07시 44분 08초에 생성되어, 07시 44분 19초에 작업이 끝났음을 눈여겨 보자. 프로세스간 통신 등의 오버헤드로 인해 좀 더 시간이 소요되었으나 여전히 전체 시간은 단축되었음을 알 수 있다. 물론 아래와 같이 더 많은 수의 프로세스를 생성하여 작업하는 것도 가능하다. ###Code dist_configs['nproc_per_node'] = 50 with idist.Parallel(backend=backend, **dist_configs) as parallel: parallel.run(training, config, a=1, b=2) ###Output 2021-09-11 01:40:42,509 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'gloo' 2021-09-11 01:40:42,512 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes: nproc_per_node: 50 nnodes: 1 node_rank: 0 start_method: fork 2021-09-11 01:40:42,514 ignite.distributed.launcher.Parallel INFO: Spawn function '<function training at 0x7f39881228c0>' in 50 processes ###Markdown 2.1.1.5. VM 리소스 확인 이제 상기 분산처리 작업을 진행한 Colab의 VM에 대해 확인해보자. 
###Code !cat /proc/cpuinfo ###Output processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 85 model name : Intel(R) Xeon(R) CPU @ 2.00GHz stepping : 3 microcode : 0x1 cpu MHz : 1999.999 cache size : 39424 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa bogomips : 3999.99 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 85 model name : Intel(R) Xeon(R) CPU @ 2.00GHz stepping : 3 microcode : 0x1 cpu MHz : 1999.999 cache size : 39424 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 1 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa bogomips : 3999.99 clflush size : 64 cache_alignment : 64 address sizes : 46 bits physical, 48 bits virtual power management: ###Markdown 상기 실험 시 Colab으로부터 할당받은 VM은 총 2개의 인텔 제온 프로세서가 탑재되어 있고, 각 프로세서 당 코어 수는 1개이다. 따라서, 총 2개의 코어에서 8개의 (논리적) child process를 생성하여 분산처리 작업이 진행되었다는 것을 알 수 있다.>*processor : 0* ... *model name : Intel(R) Xeon(R) CPU @ 2.30GHz* ... *cpu cores : 1* ... *processor : 1* ... *model name : Intel(R) Xeon(R) CPU @ 2.30GHz* ... *cpu cores : 1* ... 2.1.1.6. 결과 해석 이제까지 Colab 기반으로 CPU단에서 분산처리를 진행하는 방식을 살펴보았다. 그리고 ‘’’2.1.1.4절의 VM 리소스 확인’’’ 항목에서, 분산처리를 위해 Colab으로부터 할당받은 VM이 2-CPU로 구성된 것이었음을 확인하였다. >*참고로 2-CPU, 즉 보드 위에 여러 개의 CPU를 탑재하는 multi-CPU 방식은 2000년대 초반 시절에 [무어의 법칙](https://en.wikipedia.org/wiki/Moore%27s_law)/[폴락의 법칙](https://en.wikipedia.org/wiki/Pollack%27s_rule) 극복을 위해 고려되던 것이다. multi-CPU방식은 CPU 별로 어드레싱을 분리할 수 있으므로 사용 가능한 램 용량을 크게 늘릴 수 있다는 등의 장점도 있다. 그러나 이후 하나의 다이(die)에 2개의 코어를 얹는 인텔 코어2가 큰 성공을 거두면서, 멀티코어(multi-core) 방식이 새로운 주류가 되었다. 현 시점 기준으로는 인텔의 최신 10세대 Core i9이 10코어/20쓰레드를 지원하고, AMD는 3세대 Ryzen 9에서 인텔보다 앞서는 16코어/32쓰레드를 지원한다. ~~AMD 직원 대상 리사 수 지지율 조사 결과 무려 98%가 지지!!~~* 분산처리에서는 여러 개의 프로세스를 생성하여 동시에 작업을 진행시키는 것이 중요하다. 여기서 동시에 작업을 진행시킨다는 의미는 동시에 작업을 하는 것처럼 보이는 것([concurrent computing](https://en.wikipedia.org/wiki/Concurrent_computing))이 아니라, 말 그대로 동시에 작업이 진행되는 것([parallel computing](https://en.wikipedia.org/wiki/Parallel_computing))을 말한다. 코어 1개에서는 1개의 프로세스가 실행될 수 있다. 
시분할 스케쥴링 등의 방법을 통해 여러 개의 프로세스가 동시에 실행되는 것처럼 보일 수 있지만, 실제로 한 시점에 실행되는 프로세스는 코어당 1개이다. ~~예외로 인텔의 하이퍼쓰레드 같은 기술이 있지만 단순화를 위해 언급하지 않는다.~~ 따라서 2-코어의 경우 2개의 프로세스가 동시에 실행될 수 있으며, 이는 Colab으로부터 할당받은 2-CPU의 경우에도 동일하다. ~~Colab의 2-CPU는, 분산처리 관점에서 multi-core의 하나인 dual-core 경우와 동일하게 취급해도 무방하다.~~ 이를 상기 ‘’’2.1.1.2 노드 1개, 노드 당 프로세스 수 2개 실행” 항목에서 얻은 결과와 비교하여 생각해 보자. >*2021-06-25 07:48:33,651 이그나이트.distributed.launcher.Parallel INFO: Spawn function '' in 2 processes* *1 : run with config: {'c': 12345} - backend= gloo* *0 : run with config: {'c': 12345} - backend= gloo* *0 : Total time running random_sort: 2.119020938873291 seconds* *1 : Total time running random_sort: 2.177091121673584 seconds* 개별 프로세스 당 약 2.1초가 소요되는 작업을 진행하였고, 분산처리 기준의 전체 작업시간은 약 3초이다. 그리고 결과 메시지에 포함된 rank id 0번과 1번 표시로부터, 할당받은 VM에 포함된 2개의 CPU에 프로세스가 각각 1개씩 분할되어 작업이 진행되었음을 확인할 수 있다. 또한 전체 작업 시간이 3초로서, 개별 프로세스 당 작업 시간인 2.1초보다 약 0.9초 더 소요된 이유는 데이터의 교환, 컨텍스트 스위치로 인한 캐시 적중 실패 등의 오버헤드로 인한 것이다. 이에 대해 좀 더 자세히 알고 싶은 경우 [암달의 법칙](https://en.wikipedia.org/wiki/Amdahl%27s_law) 링크를 참조한다. 이제 ‘’’2.1.1.3 노드 1개, 노드 당 프로세스 수 8개 실행”항목에서 얻은 결과와도 비교해본다.>*2021-06-25 07:44:08,259 이그나이트.distributed.launcher.Parallel INFO: Spawn function '' in 8 processes* *0 : run with config: {'c': 12345} - backend= gloo* *4 : run with config: {'c': 12345} - backend= gloo* *5 : run with config: {'c': 12345} - backend= gloo* *3 : run with config: {'c': 12345} - backend= gloo* *2 : run with config: {'c': 12345} - backend= gloo* *1 : run with config: {'c': 12345} - backend= gloo* *6 : run with config: {'c': 12345} - backend= gloo* *7 : run with config: {'c': 12345} - backend= gloo* *7 : Total time running random_sort: 8.505528688430786 seconds* *0 : Total time running random_sort: 8.733642339706421 seconds* *6 : Total time running random_sort: 8.731879472732544 seconds* *4 : Total time running random_sort: 8.774088859558105 seconds* *3 : Total time running random_sort: 8.826892614364624 seconds* *2 : Total time running random_sort: 8.832333087921143 seconds* *1 : Total time running random_sort: 8.853899955749512 seconds* *5 : Total time running random_sort: 8.880859375 seconds* *2021-06-25 07:44:19,250 이그나이트.distributed.launcher.Parallel INFO: End of run* 앞에서는 각 프로세스 당 약 2.1초가 소요되었는데, 이번에는 프로세스 당 8.7초에서 8.8초가 소요되었음을 알 수 있다. 이는 2-CPU에 8개의 분산처리 작업이 요청됨에 따라, 마치 동시에 모든 작업이 처리되는 것처럼 보이도록 진행되었기 때문이다. 그럼에도 불구하고, 각 CPU 코어 관점에서 볼 때 여러 개의 프로세스가 교체되는 오버헤드가 추가 발생했지만 2.1초 분량의 작업이 8차례 순차 작업되는 시간보다는 줄어들었음을 알 수 있다. 그리고 아래 그림은 wandb(weight and bias) tool을 이용하여 추적한 CPU utilization 확인 결과이다. Note: This is not an official [LG AI Research](https://www.lgresearch.ai/) product but sample code provided for an educational purposeauthor: John H. Kim email: [email protected] / [email protected] ###Code ###Output _____no_output_____
Titanic-preprocessing.ipynb
###Markdown Load and prepare the Titanic datasetMany ideas in this notebook are inspired from https://www.kaggle.com/mnassrib/titanic-logistic-regression-with-python. Document dependenciesSkip this section. To document the dependencies, execute the following cells after executing all relevant import statements. ###Code !pip install --user watermark %load_ext watermark print() %watermark -v print() %watermark -iv ###Output CPython 3.7.4 IPython 7.8.0 tensorflow_datasets 3.1.0 numpy 1.18.5 pandas 0.25.1 tensorflow 2.1.0 ###Markdown Import packages ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow_datasets as tfds import tensorflow as tf ###Output _____no_output_____ ###Markdown Load the Titanic dataset ###Code ds = tfds.load('titanic', split='train') print(type(ds)) for elem in tfds.as_numpy(ds.take(1)): dict_of_lists = {key: [] for key in elem['features']} dict_of_lists['survived'] = [] dict_of_lists for elem in tfds.as_numpy(ds): dict_of_lists['survived'].append(elem['survived']) for key in elem['features']: dict_of_lists[key].append(elem['features'][key]) print(dict_of_lists) df_raw = pd.DataFrame(data=dict_of_lists) df_raw.head() ###Output _____no_output_____ ###Markdown Pre-processing 1: Drop columns that will not be used furtherThe column 'cabin' has mostly unknown values. 'boat' is known almost exclusively for survivors. The others don't hold useful information. ###Code df_drop = df_raw.drop(columns=['boat', 'body', 'cabin', 'home.dest', 'name', 'ticket']) df_drop.head() ###Output _____no_output_____ ###Markdown 2: Clean the remaining features. embarked ###Code df_drop.pivot(columns='embarked', values='survived').count() ###Output _____no_output_____ ###Markdown embarked category 3 corresponds to missing values, of which there are two. They can be replaced with the most frequent port of embarkation (2, corresponding to Southampton) ###Code df_embarked = df_drop.copy() df_embarked['embarked'] = df_drop.embarked.apply(lambda x: min(2,x)) df_embarked.pivot(columns='embarked', values='survived').count() ###Output _____no_output_____ ###Markdown Then, `embarked` is finally one-hot-encoded. It is not necessary to keep all three embarkation ports, because they are linearly dependent with the "mean column" or column of 1s that is equivalent to adding a bias term in the linear model, which is done in net_fn. ###Code df_embarked_1hot = pd.get_dummies(df_embarked, columns=['embarked']).drop(columns=['embarked_2']) df_embarked_1hot.head() ###Output _____no_output_____ ###Markdown fare ###Code df_embarked_1hot.plot(y='fare', kind='density')#, bins=[-1]+list(range(0,550,50))) print(df_embarked_1hot['fare'].min()) print((df_embarked_1hot['fare'] < 0).sum()) ###Output -1.0 1 ###Markdown We simlmpy standardize the `fare` feature, ignoring the one passenger with a negative fare. 
###Code df_fare = df_embarked_1hot.copy() df_fare['fare'] = (df_fare['fare'] - df_fare['fare'].mean()) / df_fare['fare'].std() df_fare['fare'].describe() ###Output _____no_output_____ ###Markdown parch and sibspThese columns are combined into a binary variable indicating whether the passenger travelled alone: ###Code df_parch = df_fare.copy() df_parch['travel_alone']=np.where((df_parch["sibsp"]+df_parch["parch"])>0, 0, 1) df_parch.head() df_parch = df_parch.drop(columns=['sibsp', 'parch']) df_parch.head() ###Output _____no_output_____ ###Markdown Passenger class ###Code df_parch.pivot(columns='pclass', values='survived').count() ###Output _____no_output_____ ###Markdown pclass is one-hot encoded. It is not necessary to keep all three passenger classes, because they are linearly dependent with the "mean column" or column of 1s that is equivalent to adding a bias term in the linear model, which is done in net_fn. ###Code df_pclass = pd.get_dummies(df_parch, columns=['pclass']).drop(columns=['pclass_2']) df_pclass.head() ###Output _____no_output_____ ###Markdown sex ###Code df_pclass.pivot(columns='sex', values='survived').count() ###Output _____no_output_____ ###Markdown The `sex` feature is good to go. age ###Code proportion_missing = np.sum(df_pclass['age'] < 0) / df_pclass.shape[0] proportion_missing df_pclass.plot(y='age', kind='hist', bins=[-1]+list(range(0,101,5))) df_age = df_pclass.copy() df_age['age_group'] = pd.cut(df_age['age'], 16) df_age.groupby(by='age_group').mean().plot(y='survived', rot=60, kind='bar') ###Output _____no_output_____ ###Markdown 20.1% of `age` values are missing. Since the age distribution is skewed, we impute them with the column median instead of the mean before standardizing. We also create a new feature `is_minor` for children aged 16 and under, following MNassri (cf. link at the top of the notebook), even though we do not find a surprising difference in survival rates between minors and adults. ###Code df_age['age_missing'] = df_age['age'] < 0 df_age.head(10) age_known_series = df_age.loc[~df_age['age_missing'], 'age'] age_known_series # Use the median for missing values age_median = age_known_series.median() age_mean = age_known_series.mean() age_std = age_known_series.std() print('median:', age_median,'mean:', age_mean, 'std:', age_std) df_age['age_imputed'] = df_age['age'] df_age.loc[df_age['age_missing'], 'age_imputed'] = age_median df_age.head(10) df_age.plot(y='age_imputed', kind='hist', bins = 32) df_age['age_standardized'] = (df_age['age_imputed'] - age_mean) / age_std df_age.head(10) df_age['is_minor'] = (df_age['age'] < 17).astype(int) df_age.head(55) df_age.loc[df_age['age_missing'], 'is_minor'] = proportion_missing df_age.head(55) df = df_age.drop(columns=['age_missing', 'age_group', 'age_imputed', 'age']) df.head() ###Output _____no_output_____ ###Markdown Saving the result ###Code df.to_csv("../data/non-private/Titanic_preprocessed.csv", index=False) df_age.to_csv("../data/non-private/Titanic_preprocessed_full.csv", index=False) ###Output _____no_output_____ ###Markdown 3.
Create X, y, and age, by splitting off the `survived` column and copying `age` ###Code df = pd.read_csv('../data/non-private/Titanic_preprocessed.csv') df_age = pd.read_csv('../data/non-private/Titanic_preprocessed_full.csv') na = np.sum(df_age['age'] < 0) / len(df) print("unknown age:", na) lower_21 = np.sum(df_age['age'] < 21) / len(df) print("0-20 years:", lower_21 - na) lower_36 = np.sum(df_age['age'] < 36) / len(df) print("21-35 years:", lower_36 - lower_21) print("36+ years:", np.sum(df_age['age'] >=36) / len(df)) print("na values:", 1 - df_age['age'].count() / len(df)) age_original = df_age.loc[:, ['age']].copy() y = df.loc[:,['survived']].copy() X = df.drop(columns=['survived']) age_original.head() y.head() X.head() X.columns ###Output _____no_output_____ ###Markdown Save the feature matrix $\mathbf{X}$, the target vector $\mathbf{y}$, and the age (for reference) ###Code import os save_dir = '../data/non-private/predict_titanic_survived' if not os.path.isdir(save_dir): os.mkdir(save_dir) X.to_csv(os.path.join(save_dir, 'X.csv'), index=False) y.to_csv(os.path.join(save_dir, 'y.csv'), index=False) age_original.to_csv(os.path.join(save_dir, 'age.csv'), index=False) y.shape y2 = np.loadtxt(os.path.join(save_dir, 'y.csv'), skiprows=1, delimiter=',') y2.shape X['fare'].min() ###Output _____no_output_____
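###Markdown As a quick sanity check, the saved files can be read back and compared (a sketch only; it assumes the cells above have already written the CSV files): ###Code X_check = pd.read_csv(os.path.join(save_dir, 'X.csv'))
y_check = pd.read_csv(os.path.join(save_dir, 'y.csv'))
age_check = pd.read_csv(os.path.join(save_dir, 'age.csv'))

# all three tables should describe the same set of passengers
assert len(X_check) == len(y_check) == len(age_check)
print(X_check.shape, y_check.shape, age_check.shape)
 ###Output _____no_output_____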
ATSC_500/ATSC_500_Assignment_XI_Entrainment_layer_growth.ipynb
###Markdown Show the vertical profile of domain-wide averaged buoyancy flux ###Code from mpl_toolkits.mplot3d import Axes3D W1 = W[0, ...] TABS1 = TABS[0, ...] Fb_ave = ((W1 - W1.mean(0))*(TABS1 - TABS1.mean(0))).mean((2, 3)) GridZ, GridT = np.meshgrid(z, time) L = len(time) ztop = np.zeros(L) for i in range(L): ztop[i] = z[np.flipud(np.cumsum(np.abs(Fb_ave[i, ::-1]))) < 0.1].min() fake_x = np.zeros(L) B = plt.cm.RdBu(255-25) R = plt.cm.RdBu(25) # plot fig = plt.figure(figsize=(12, 6)) ax = fig.gca(projection='3d') ax.grid(linestyle=':') ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=14) ax.zaxis.set_tick_params(labelsize=14) ax.xaxis._axinfo["grid"]['linestyle'] = ':' ax.yaxis._axinfo["grid"]['linestyle'] = ':' ax.zaxis._axinfo["grid"]['linestyle'] = ':' ax.xaxis._axinfo["grid"]['color'] = 'k' ax.yaxis._axinfo["grid"]['color'] = 'k' ax.zaxis._axinfo["grid"]['color'] = 'k' ax.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) ax.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) ax.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 1.0)) ax.set_xlabel('\n\nBuoyancy flux [$\mathrm{K\cdot m\cdot s^{-1}}$]', fontsize=14) ax.set_ylabel('\n\nTime [s]', fontsize=14) ax.set_zlabel('\nHeight [m]', fontsize=14) ax.set_title('Buoyancy flux and mixed layer height evolutions', fontsize=16, y=1.075) ax.set_xlim3d([-0.2, 1.1]) ax.set_ylim3d([time[0], time[L-1]]) for i in range(0, L, 2): ax.plot(Fb_ave[i, :], GridT[i, :], GridZ[i, :], color=B, lw=3.5) ax.plot(fake_x[::2], time[::2], ztop[::2], ls='--', color=R, lw=3.5, label='PBL height') LG = ax.legend(bbox_to_anchor=(0.9, 0.95), prop={'size':14}); LG.draw_frame(False) ax.view_init(45, -30) plt.tight_layout() ###Output _____no_output_____ ###Markdown Verify if the height growing as $\sqrt{\mathrm{time}}$ ###Code secs = time-time[0] k, b = np.polyfit(np.sqrt(secs), ztop, 1) fig = plt.figure(figsize=(7, 3)) ax = fig.gca() ax.grid(linestyle=':') ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.xaxis.set_tick_params(labelsize=14) ax.yaxis.set_tick_params(labelsize=14) [j.set_linewidth(2.5) for j in ax.spines.values()] ax.tick_params(axis="both", which="both", bottom="off", top="off", \ labelbottom="on", left="off", right="off", labelleft="on") ax.set_xlabel('Seconds after start [s]', fontsize=14) ax.set_ylabel('Mixed layer height [m]', fontsize=14) ax.plot(secs, ztop, '--', lw=3.5, color=B) ax.plot(secs, k*np.sqrt(secs)+b, ls='-', lw=2.5, color=R) ax.text(0.15, 600, 'y = '+str(np.around(k, 2))+' $\sqrt{t}$ +'+str(np.around(b, 2)), fontsize=14) ###Output _____no_output_____
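###Markdown The square-root behaviour found by the fit is what a simple encroachment argument predicts (a sketch of the standard argument, with entrainment neglected and a constant surface flux assumed): if a constant surface kinematic heat flux $F_s$ warms a mixed layer that grows into air with a potential-temperature lapse rate $\gamma = \partial\theta/\partial z$, then$$\gamma h \frac{dh}{dt} \approx F_s \quad\Rightarrow\quad h(t) \approx \sqrt{\frac{2 F_s t}{\gamma}} \propto \sqrt{t},$$which is consistent with the linear dependence of the mixed layer height on $\sqrt{t}$ seen in the fit above.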
concepts/Data Structures/01 LinkedLists/04 Reverse a Linked List.ipynb
###Markdown Reversing a linked list exerciseGiven a singly linked list, return another linked list that is the reverse of the first. ###Code # Helper Code class Node: def __init__(self, value): self.value = value self.next = None class LinkedList: def __init__(self): self.head = None def append(self, value): if self.head is None: self.head = Node(value) return node = self.head while node.next: node = node.next node.next = Node(value) def __iter__(self): node = self.head while node: yield node.value node = node.next def __repr__(self): return str([v for v in self]) ###Output _____no_output_____ ###Markdown Write the function definition here ###Code # Solution # Time complexity O(N) def reverse(linked_list): """ Reverse the inputted linked list Args: linked_list(obj): Linked List to be reversed Returns: obj: Reveresed Linked List """ new_list = LinkedList() prev_node = None """ A simple idea - Pick a node from the original linked list traversing form the beginning, and prepend it to the new linked list. We have to use a loop to iterate over the nodes of original linked list """ # In this "for" loop, the "value" is just a variable whose value will be updated in each iteration for value in linked_list: # create a new node new_node = Node(value) # Make the new_node.next point to the # node created in previous iteration new_node.next = prev_node # This is the last statement of the loop # Mark the current new node as the "prev_node" for next iteration prev_node = new_node # Update the new_list.head to point to the final node that came out of the loop new_list.head = prev_node return new_list ###Output _____no_output_____ ###Markdown Let's test your function ###Code llist = LinkedList() for value in [4,2,5,1,-3,0]: llist.append(value) flipped = reverse(llist) is_correct = list(flipped) == list([0,-3,1,5,2,4]) and list(llist) == list(reverse(flipped)) print("Pass" if is_correct else "Fail") ###Output Pass ###Markdown Show Solution ###Code # Solution # Time complexity O(N) def reverse(linked_list): """ Reverse the inputted linked list Args: linked_list(obj): Linked List to be reversed Returns: obj: Reveresed Linked List """ new_list = LinkedList() prev_node = None """ A simple idea - Pick a node from the original linked list traversing form the beginning, and prepend it to the new linked list. We have to use a loop to iterate over the nodes of original linked list """ # In this "for" loop, the "value" is just a variable whose value will be updated in each iteration for value in linked_list: # create a new node new_node = Node(value) # Make the new_node.next point to the # node created in previous iteration new_node.next = prev_node # This is the last statement of the loop # Mark the current new node as the "prev_node" for next iteration prev_node = new_node # Update the new_list.head to point to the final node that came out of the loop new_list.head = prev_node return new_list ###Output _____no_output_____
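###Markdown For reference, the list can also be reversed in place by re-pointing each node's `next` link, which avoids building a second list (a sketch that is not part of the original exercise; it reuses the `Node`/`LinkedList` helpers above and mutates its input): ###Code def reverse_in_place(linked_list):
    """Reverse linked_list by re-linking its nodes: O(N) time, O(1) extra space."""
    prev_node = None
    current = linked_list.head
    while current:
        next_node = current.next   # remember the rest of the list
        current.next = prev_node   # point this node backwards
        prev_node = current
        current = next_node
    linked_list.head = prev_node
    return linked_list

llist2 = LinkedList()
for value in [1, 2, 3]:
    llist2.append(value)
print(reverse_in_place(llist2))   # expected: [3, 2, 1]
 ###Output _____no_output_____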
solutions/1.ipynb
###Markdown Musterlösung zu Übungsblatt 1 * [Aufgabe 1](Aufgabe-1) * [Aufgabe 2](Aufgabe-2) * [Aufgabe 3](Aufgabe-3) ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.stats plt.style.use('ggplot') ###Output _____no_output_____ ###Markdown --- Aufgabe 1Gegeben sei eine parametrische Funktion $y = f(x)$, $y = 1 + a_1x + a_2x^2$ mit Parametern $a_1 = 2.0 ± 0.2$, $a_2 = 1.0 ± 0.1$ und Korrelationskoeffizient $ρ = −0.8$.--- ###Code a1, a1_err = 2.0, 0.2 a2, a2_err = 1.0, 0.1 rho = -0.8 ###Output _____no_output_____ ###Markdown --- 1.1Geben Sie die Kovarianzmatrix von $a_1$ und $a_2$ an.--- Die Kovarianzmatrix von $a_1$ und $a_2$ lässt sich wie folgt berechnen:$$\mathrm{Cov}(a) = \pmatrix{\sigma^2_{a_1} & \rho\sigma_{a_1}\sigma_{a_2} \\ \rho\sigma_{a_1}\sigma_{a_2} & \sigma^2_{a_2}}$$ ###Code c12 = rho * a1_err * a2_err covariance = np.matrix([[a1_err ** 2, c12], [c12, a2_err ** 2]]) covariance ###Output _____no_output_____ ###Markdown --- 1.2Bestimmen Sie analytisch die Unsicherheit von $y$ als Funktion von $x$:--- Dazu bestimmen wir zunaechst die Ableitungen von $y$ nach $a_1$ und $a_2$$$\frac{\partial{}y}{\partial{}a_1} = x \,,\quad \frac{\partial{}y}{\partial{}a_2} = x^2 \,.$$Daraus können wir die Kovarianz von $y$ nach$$\sigma^2_y = \mathrm{Cov}(y) = \sum_{ij}\frac{\partial{}y}{\partial{}a_i}\frac{\partial{}y}{\partial{}a_j}\mathrm{Cov}(a)_{ij}$$bestimmen. Einsetzen der eingangs bestimmten Ableitungen liefert$$\sigma^2_y = c_{11}x^2 + 2c_{12}x^3 + c_{22}x^4$$wobei die $c_{ij}$ die Einträge von $\mathrm{Cov}(a)$ sind. Der Ausdruck lässt sich weiter vereinfachen zu$$\sigma^2_y = x^2\left(\sigma^2_{a_1} + \sigma^2_{a_2}x^2 + 2\rho\sigma_{a_1}\sigma_{a_2}x\right) \,.$$Daraus ergibt sich für die Unsicherheit$$ \sigma_y = \lvert{}x\rvert\sqrt{\sigma^2_{a_1} + \sigma^2_{a_2}x^2 + 2\rho\sigma_{a_1}\sigma_{a_2}x} \,.$$ --- 1.2.1unter Vernachlässigung der Korrelation--- (also für $\rho = 0$) vereinfacht sich der obige Ausdruck zu$$\sigma_y = \lvert{}x\rvert\sqrt{\sigma^2_{a_1} + \sigma^2_{a_2}x^2} \,.$$Mit den eingangs berechneten Werten aus `covariance` ergibt sich also$$\sigma_y = \lvert{}x\rvert\sqrt{0.04 + 0.01x^2} = 0.2\lvert{}x\rvert{}\sqrt{1 + 0.25x^2} \,.$$ ###Code def err_ana_wo(x): return 0.2 * np.abs(x) * np.sqrt(1 + 0.25 * x ** 2) xs = np.linspace(-3, 3, 10000) ys = 1 + a1 * xs + a2 * xs ** 2 errs = err_ana_wo(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- 1.2.2mit Berücksichtigung der Korrelation--- $$\sigma_y = \lvert{}x\rvert\sqrt{0.04 + 0.01x^2 - 0.016x} = 0.2\lvert{}x\rvert\sqrt{1 - 0.8x + 0.25x^2} \,.$$ ###Code def err_ana(x): return 0.2 * np.abs(x) * np.sqrt(1 + 0.25 * x ** 2 - 0.4 * x) errs = err_ana(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- 1.3Bestimmen Sie per Monte Carlo die Unsicherheit von $y$ als Funktion von $x$: 1.3.1Generieren Sie Wertepaare $(a_1, a_2)$ gemäß ihrer Kovarianzmatrix und visualisieren Sie diese, z.B. 
mit einem Scatter-Plot._Hinweis_: Wenn $x_1$ und $x_2$ zwei gaussverteilte Zufallszahlen mit Mittelwert null und Varianz eins sind, erhält man ein Paar korrelierter gaussverteilter Zufallszahlen $(y_1, y_2)$ mit Mittelwert null und Varianz eins durch $(y_1 = x_1; y_2 = x_1ρ + x_2\sqrt{1 − \rho^2})$.--- ###Code x1s, x2s = np.random.normal(size=(2, 10000)) plt.hist2d(x1s, x2s, bins=40) plt.title('2-dim Normalverteilung') plt.xlabel('$x_1$') plt.ylabel('$x_2$') plt.show() y1s = x1s y2s = x1s * rho + x2s * np.sqrt(1 - rho ** 2) plt.hist2d(y1s, y2s, bins=40) plt.title('2-dim Normalverteilung mit Korrelation') plt.xlabel('$y_1$') plt.ylabel('$y_2$') plt.show() a1s = a1 + y1s * a1_err a2s = a2 + y2s * a2_err plt.hist2d(a1s, a2s, bins=40) plt.title('Verteilung der $a_1$ und $a_2$') plt.xlabel('$a_1$') plt.ylabel('$a_2$') plt.show() ###Output _____no_output_____ ###Markdown --- 1.3.2Bestimmen Sie die Verteilung von $y$ für $x = \{−1, 0, +1\}$ und vergleichen Sie Mittelwert und Varianz (Standardabweichung) mit den Resultaten der analytischen Rechnung.--- ###Code def y(x, a1, a2): return 1 + a1 * x + a2 * x ** 2 def var_analytical(x): xx = x ** 2 #return xx * (covariance[0, 0] + covariance[1, 1] * xx + 2 * covariance[0, 1] * x) return 0.04 * xx * (1 + 0.25 * xx - 0.8 * x) for x in (-1, 0, 1): ys = y(x, a1s, a2s) mean = np.mean(ys) var = np.var(ys) print('〈y({})〉= {:.3f}'.format(x, mean)) print(' σ² = {:.3f}'.format(var)) print(' analytical = {:.3f}'.format(var_analytical(x))) plt.hist(ys, bins=100) plt.xlabel('y({})'.format(x)) plt.ylabel('Absolute Häufigkeit') plt.show() ###Output 〈y(-1)〉= 0.002 σ² = 0.081 analytical = 0.082 ###Markdown Der Fall $x = 0$ ist hier besonders. Da alle Koeffizienten vor Potenzen von $x$ stehen, ergibt sich für den Fall immer $y=0$ unabhängig von den $a_i$. Wir können also keine Aussage über die Varianz treffen. --- Aufgabe 2Betrachten Sie folgende Reparametrisierung von $y = f(x)$: $$y = 1 + \frac{x(1+x)}{b_1} + \frac{x(1-x)}{b_2}$$ 2.1Bestimmen Sie analytisch die transformierten Parameter $b_1$ und $b_2$ und deren Kovarianzmatrix--- Wir lösen die Reparametrisierung nach Koeffizienten von Potenzen von $x$ auf. Dabei können wir den Term $1$ vernachlässigen, weil er in beiden Definitionen gleichermaßen auftritt.\begin{align} a_1 x + a_2 x^2 &= \frac{x(1 + x)}{b_1} + \frac{x(1 - x)}{b_2} \\ &= \frac{x}{b_1} + \frac{x^2}{b_1} + \frac{x}{b_2} - \frac{x^2}{b_2} \\ &= x\left(\frac{1}{b_1} + \frac{1}{b_2}\right) + x^2\left(\frac{1}{b_1} - \frac{1}{b_2}\right)\end{align}Damit ist$$a_1 = \left(\frac{1}{b_1} + \frac{1}{b_2}\right) \quad\text{und}\quad a_2 = \left(\frac{1}{b_1} - \frac{1}{b_2}\right)$$also$$b_1 = \frac{2}{a_1 + a_2} \quad\text{und}\quad b_2 = \frac{2}{a_1 - a_2} \,.$$Für die Jacobimatrix der Transformation ergibt sich\begin{equation} M = \pmatrix{ \frac{-2}{(a_1 + a_2)^2} & \frac{-2}{(a_1 + a_2)^2} \\ \frac{-2}{(a_1 - a_2)^2} & \frac{+2}{(a_1 - a_2)^2} } \quad\text{wobei}\quad m_{ij} = \frac{\partial b_i}{\partial a_j}\end{equation}\begin{equation} M^T = \pmatrix{ \frac{-2}{(a_1 + a_2)^2} & \frac{-2}{(a_1 - a_2)^2} \\ \frac{-2}{(a_1 + a_2)^2} & \frac{+2}{(a_1 - a_2)^2} }\end{equation}Die transformierte Kovarianzmatrix ist dann $\mathrm{Cov}(b) = M\mathrm{Cov}(a)M^T$. 
###Code b1 = 2 / (a1 + a2) b2 = 2 / (a1 - a2) denom1 = (a1 + a2) ** 2 denom2 = (a1 - a2) ** 2 M = np.matrix([[-2 / denom1, -2 / denom1], [-2 / denom2, 2 / denom2]]) cov_b = M * covariance * M.T cov_b ###Output _____no_output_____ ###Markdown --- 2.2Bestimmen Sie die Kovarianzmatrix der transformierten Parameter per Monte Carlo--- ###Code b1s = 2 / (a1s + a2s) b2s = 2 / (a1s - a2s) print('b1 = {}'.format(np.mean(b1s))) print('var = {}'.format(np.var(b1s))) print('b2 = {}'.format(np.mean(b2s))) print('var = {}'.format(np.var(b2s))) plt.hist(b1s, bins=100) plt.show() plt.hist(b2s, bins=100) plt.show() ###Output b1 = 0.6681390380773017 var = 0.0009033648632514698 b2 = 2.2516701252806204 var = 1.797015387269852 ###Markdown Dabei tritt das Problem auf, dass für einige Kombinationen von Werten für $a_1$ und $a_2$ der Nenner sehr nah an `0` kommt. Dadurch ergeben sich sehr große (unrealistische) Werte für $b_2$. Wir können dem entgegenwirken, indem wir einen Bereich festlegen in dem wir die Werte für $b_2$ erwarten. ###Code cut = np.logical_and(b2s < 5, b2s > 0) b1s_ = b1s[cut] b2s_ = b2s[cut] print('b2 gefiltert = {}'.format(np.mean(b2s_))) print('var = {}'.format(np.var(b2s_))) plt.hist(b1s_, bins=100) plt.show() plt.hist2d(b1s_, b2s_, bins=100) plt.show() ###Output b2 gefiltert = 2.1462016045455883 var = 0.4459879545199094 ###Markdown Wenn wir uns das Leben etwas erleichtern wollen, koennen wir auch einfach die Funktion `cov` aus Numpy verwenden, die uns fuer zwei Arrays direkt die Kovarianzmatrix ausrechnet ###Code ncov_b = np.cov(b1s_, b2s_) ncov_b ###Output _____no_output_____ ###Markdown --- 2.3Bestimmen Sie analytisch die Unsicherheit von $y$ als Funktion von $x$: 2.3.1unter Verwendung der analytisch bestimmten Kovarianzmatrix von $(b_1, b_2)$--- Zunächst berechnen wir die partiellen Ableitungen von $y$ nach den Koeffizienten $b_1$ und $b_2$ und damit die Jacobimatrix $M$.\begin{equation} M = \pmatrix{\frac{\partial y}{\partial b_1} \\ \frac{\partial y}{\partial b_2}} = \pmatrix{\frac{-x(1+x)}{b_1^2}\\ \frac{-x(1-x)}{b_2^2}}\end{equation}Damit ergibt sich für die Varianz\begin{align} \sigma_y^2 &= M^T \mathrm{Cov}(b) M \\ &= x^2\left[\frac{c_{11}}{b_1^4}(1+x)^2 + \frac{2c_{12}}{b_1^2b_2^2}(1 - x^2) + \frac{c_{22}}{b_2^4}(1-x)^2\right] \\ &= x^2\left(\alpha + \beta x + \gamma x^2 \right)\end{align}mit den Koeffizienten\begin{equation} \alpha = \left(\frac{c_{11}}{b_1^4} + \frac{2c_{12}}{b_1^2b_2^2} + \frac{c_{22}}{b_2^4}\right) \quad,\quad \beta = 2\left(\frac{c_{11}}{b_1^4} - \frac{c_{22}}{b_2^4}\right) \quad\text{und}\quad \gamma = \left(\frac{c_{11}}{b_1^4} - \frac{2c_{12}}{b_1^2b_2^2} + \frac{c_{22}}{b_2^4}\right) \,.\end{equation}In Zahlen ausgedrückt sind die Koeffizienten ###Code c11 = cov_b[0, 0] c12 = cov_b[0, 1] c22 = cov_b[1, 1] print(cov_b) a = c11 / b1 ** 4 b = 2 * c12 / b1 ** 2 / b2 ** 2 c = c22 / b2 ** 4 alpha = a + b + c beta = 2 * (a - c) gamma = a - b + c np.sqrt(alpha), alpha/alpha, beta/alpha, gamma/alpha ###Output [[ 0.00088889 0.01333333] [ 0.01333333 0.328 ]] ###Markdown Es ist also\begin{equation} \sigma_y = 0.2\lvert{}x\rvert\sqrt{1 - 0.8x - 0.25x^2}\end{equation}was exakt das gleiche Ergbnis ist wie für die ursprüngliche Parametrisierung. 
###Code def err_ana(x): return np.abs(x) * np.sqrt(alpha + beta * x + gamma * x ** 2) xs = np.linspace(-10, 10, 10000) ys = 1 + xs * (xs + 1) / b1 + xs * (xs - 1) / b2 errs = err_ana(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- 2.3.2unter Verwendung der numerisch bestimmten Kovarianzmatrix von $(b_1, b_2)$--- Die gleiche Rechnung mit den per Monte Carlo bestimmten Werten ###Code c11_n = ncov_b[0, 0] c12_n = ncov_b[0, 1] c22_n = ncov_b[1, 1] a_n = c11_n / b1 ** 4 b_n = 2 * c12_n / b1 ** 2 / b2 ** 2 c_n = c22_n / b2 ** 4 alpha_n = a_n + b_n + c_n beta_n = 2 * (a_n - c_n) gamma_n = a_n - b_n + c_n np.sqrt(alpha_n), alpha_n/alpha_n, beta_n/alpha_n, gamma_n/alpha_n def err_num(x): return np.abs(x) * np.sqrt(alpha_n + beta_n * x + gamma_n * x ** 2) errs = err_num(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- Aufgabe 3Lösen Sie die obigen Teilaufgaben für $y = f(x)$ mit$$y = \ln\left(1 + a_1x + a_2x^2\right) \quad \text{bzw.} \quad y = \ln\left(1 + \frac{x(1+x)}{b_1} + \frac{x(x-1)}{b_2}\right)$$ Im folgenden nennen wir das neue $y$ der Einfachheit halber $z$.\begin{equation} z = \ln(1 + a_1x + a_2x^2) = \ln(y)\end{equation}Für die Unsicherheit von $z$ ergibt sich\begin{align} \sigma_z &= \sqrt{\left(\frac{\partial z}{\partial y}\right)^2\sigma_y^2} \\ &= \sqrt{\left(\frac{1}{y}\right)^2\sigma_y^2} \\ &= \left\lvert\frac{\sigma_y}{y}\right\rvert \\ &= \frac{0.2\lvert x\rvert\sqrt{1 + 0.25x^2 - 0.4x}}{\lvert 1 + a_1 x + a_2 x^2 \rvert} \,.\end{align}Völlig analog ist die Rechnung für die Reparametrisierung. Hier ergibt sich\begin{align} \sigma_z &= \left\lvert\frac{\sigma_y}{y}\right\rvert \\ &= \frac{0.2\lvert x\rvert\sqrt{1 + 0.25x^2 - 0.4x}}{\left\lvert 1 + \frac{x(1+x)}{b_1} + \frac{x(x-1)}{b_2}\right\rvert} \,.\end{align} ###Code def err1(x): return err_ana(x) / np.abs(1 + a1 * x + a2 * x ** 2) def err2(x): return err_ana(x) / np.abs(1 + xs * (xs + 1) / b1 + xs * (xs - 1) / b2) xs = np.linspace(-3, 3, 100) ys1 = 1 + a1 * xs + a2 * xs ** 2 ys2 = np.log(1 + xs * (xs + 1) / b1 + xs * (xs - 1) / b2) errs1 = err1(xs) errs2 = err2(xs) plt.plot(xs, ys1) plt.fill_between(xs, ys1 - errs1, ys1 + errs1, alpha=0.5) plt.show() plt.plot(xs, ys2) plt.fill_between(xs, ys2 - errs2, ys2 + errs2, alpha=0.5) plt.show() ###Output /opt/local/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in true_divide from ipykernel import kernelapp as app ###Markdown Musterlösung zu Übungsblatt 1 * [Aufgabe 1](Aufgabe-1) * [Aufgabe 2](Aufgabe-2) * [Aufgabe 3](Aufgabe-3) ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy.stats plt.style.use('ggplot') ###Output _____no_output_____ ###Markdown --- Aufgabe 1Gegeben sei eine parametrische Funktion $y = f(x)$, $y = 1 + a_1x + a_2x^2$ mit Parametern $a_1 = 2.0 ± 0.2$, $a_2 = 1.0 ± 0.1$ und Korrelationskoeffizient $ρ = −0.8$.--- ###Code a1, a1_err = 2.0, 0.2 a2, a2_err = 1.0, 0.1 rho = -0.8 ###Output _____no_output_____ ###Markdown --- 1.1Geben Sie die Kovarianzmatrix von $a_1$ und $a_2$ an.--- Die Kovarianzmatrix von $a_1$ und $a_2$ lässt sich wie folgt berechnen:$$\mathrm{Cov}(a) = \pmatrix{\sigma^2_{a_1} & \rho\sigma_{a_1}\sigma_{a_2} \\ \rho\sigma_{a_1}\sigma_{a_2} & \sigma^2_{a_2}}$$ ###Code c12 = rho * a1_err * a2_err covariance = np.matrix([[a1_err ** 2, c12], [c12, a2_err 
** 2]]) covariance ###Output _____no_output_____ ###Markdown --- 1.2Bestimmen Sie analytisch die Unsicherheit von $y$ als Funktion von $x$:--- Dazu bestimmen wir zunaechst die Ableitungen von $y$ nach $a_1$ und $a_2$$$\frac{\partial{}y}{\partial{}a_1} = x \,,\quad \frac{\partial{}y}{\partial{}a_2} = x^2 \,.$$Daraus können wir die Kovarianz von $y$ nach$$\sigma^2_y = \mathrm{Cov}(y) = \sum_{ij}\frac{\partial{}y}{\partial{}a_i}\frac{\partial{}y}{\partial{}a_j}\mathrm{Cov}(a)_{ij}$$bestimmen. Einsetzen der eingangs bestimmten Ableitungen liefert$$\sigma^2_y = c_{11}x^2 + 2c_{12}x^3 + c_{22}x^4$$wobei die $c_{ij}$ die Einträge von $\mathrm{Cov}(a)$ sind. Der Ausdruck lässt sich weiter vereinfachen zu$$\sigma^2_y = x^2\left(\sigma^2_{a_1} + \sigma^2_{a_2}x^2 + 2\rho\sigma_{a_1}\sigma_{a_2}x\right) \,.$$Daraus ergibt sich für die Unsicherheit$$ \sigma_y = \lvert{}x\rvert\sqrt{\sigma^2_{a_1} + \sigma^2_{a_2}x^2 + 2\rho\sigma_{a_1}\sigma_{a_2}x} \,.$$ --- 1.2.1unter Vernachlässigung der Korrelation--- (also für $\rho = 0$) vereinfacht sich der obige Ausdruck zu$$\sigma_y = \lvert{}x\rvert\sqrt{\sigma^2_{a_1} + \sigma^2_{a_2}x^2} \,.$$Mit den eingangs berechneten Werten aus `covariance` ergibt sich also$$\sigma_y = \lvert{}x\rvert\sqrt{0.04 + 0.01x^2} = 0.2\lvert{}x\rvert{}\sqrt{1 + 0.25x^2} \,.$$ ###Code def err_ana_wo(x): return 0.2 * np.abs(x) * np.sqrt(1 + 0.25 * x ** 2) xs = np.linspace(-10, 10, 10000) ys = 1 + a1 * xs + a2 * xs ** 2 errs = err_ana_wo(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- 1.2.2mit Berücksichtigung der Korrelation--- $$\sigma_y = \lvert{}x\rvert\sqrt{0.04 + 0.01x^2 - 0.016x} = 0.2\lvert{}x\rvert\sqrt{1 + 0.25x^2 - 0.4x} \,.$$ ###Code def err_ana(x): return 0.2 * np.abs(x) * np.sqrt(1 + 0.25 * x ** 2 - 0.4 * x) errs = err_ana(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- 1.3Bestimmen Sie per Monte Carlo die Unsicherheit von $y$ als Funktion von $x$: 1.3.1Generieren Sie Wertepaare $(a_1, a_2)$ gemäß ihrer Kovarianzmatrix und visualisieren Sie diese, z.B. 
mit einem Scatter-Plot._Hinweis_: Wenn $x_1$ und $x_2$ zwei gaussverteilte Zufallszahlen mit Mittelwert null und Varianz eins sind, erhält man ein Paar korrelierter gaussverteilter Zufallszahlen $(y_1, y_2)$ mit Mittelwert null und Varianz eins durch $(y_1 = x_1; y_2 = x_1ρ + x_2\sqrt{1 − \rho^2})$.--- ###Code x1s, x2s = np.random.normal(size=(2, 10000)) plt.hist2d(x1s, x2s, bins=40) plt.title('2-dim Normalverteilung') plt.xlabel('$x_1$') plt.ylabel('$x_2$') plt.show() y1s = x1s y2s = x1s * rho + x2s * np.sqrt(1 - rho ** 2) plt.hist2d(y1s, y2s, bins=40) plt.title('2-dim Normalverteilung mit Korrelation') plt.xlabel('$y_1$') plt.ylabel('$y_2$') plt.show() a1s = a1 + y1s * a1_err a2s = a2 + y2s * a2_err plt.hist2d(a1s, a2s, bins=40) plt.title('Verteilung der $a_1$ und $a_2$') plt.xlabel('$a_1$') plt.ylabel('$a_2$') plt.show() ###Output _____no_output_____ ###Markdown --- 1.3.2Bestimmen Sie die Verteilung von $y$ für $x = \{−1, 0, +1\}$ und vergleichen Sie Mittelwert und Varianz (Standardabweichung) mit den Resultaten der analytischen Rechnung.--- ###Code def y(x, a1, a2): return 1 + a1 * x + a2 * x ** 2 def var_analytical(x): xx = x ** 2 return 0.04 * xx * (1 + 0.25 * xx - 0.16 * x) for x in (-1, 0, 1): ys = y(x, a1s, a2s) mean = np.mean(ys) var = np.var(ys) print('〈y({})〉= {:.3f}'.format(x, mean)) print(' σ² = {:.3f}'.format(var)) print(' analytical = {:.3f}'.format(var_analytical(x))) plt.hist(ys, bins=100) plt.xlabel('y({})'.format(x)) plt.ylabel('Absolute Häufigkeit') plt.show() ###Output 〈y(-1)〉= -0.002 σ² = 0.083 analytical = 0.056 ###Markdown Der Fall $x = 0$ ist hier besonders. Da alle Koeffizienten vor Potenzen von $x$ stehen, ergibt sich für den Fall immer $y=0$ unabhängig von den $a_i$. Wir können also keine Aussage über die Varianz treffen. --- Aufgabe 2Betrachten Sie folgende Reparametrisierung von $y = f(x)$: $$y = 1 + \frac{x(1+x)}{b_1} + \frac{x(1-x)}{b_2}$$ 2.1Bestimmen Sie analytisch die transformierten Parameter $b_1$ und $b_2$ und deren Kovarianzmatrix--- Wir lösen die Reparametrisierung nach Koeffizienten von Potenzen von $x$ auf. Dabei können wir den Term $1$ vernachlässigen, weil er in beiden Definitionen gleichermaßen auftritt.\begin{align} a_1 x + a_2 x^2 &= \frac{x(1 + x)}{b_1} + \frac{x(1 - x)}{b_2} \\ &= \frac{x}{b_1} + \frac{x^2}{b_1} + \frac{x}{b_2} - \frac{x^2}{b_2} \\ &= x\left(\frac{1}{b_1} + \frac{1}{b_2}\right) + x^2\left(\frac{1}{b_1} - \frac{1}{b_2}\right)\end{align}Damit ist$$a_1 = \left(\frac{1}{b_1} + \frac{1}{b_2}\right) \quad\text{und}\quad a_2 = \left(\frac{1}{b_1} - \frac{1}{b_2}\right)$$also$$b_1 = \frac{2}{a_1 + a_2} \quad\text{und}\quad b_2 = \frac{2}{a_1 - a_2} \,.$$Für die Jacobimatrix der Transformation ergibt sich\begin{equation} M = \pmatrix{ \frac{-2}{(a_1 + a_2)^2} & \frac{-2}{(a_1 + a_2)^2} \\ \frac{-2}{(a_1 - a_2)^2} & \frac{+2}{(a_1 - a_2)^2} } \quad\text{wobei}\quad m_{ij} = \frac{\partial b_i}{\partial a_j}\end{equation}\begin{equation} M^T = \pmatrix{ \frac{-2}{(a_1 + a_2)^2} & \frac{-2}{(a_1 - a_2)^2} \\ \frac{-2}{(a_1 + a_2)^2} & \frac{+2}{(a_1 - a_2)^2} }\end{equation}Die transformierte Kovarianzmatrix ist dann $\mathrm{Cov}(b) = M\mathrm{Cov}(a)M^T$. 
###Code b1 = 2 / (a1 + a2) b2 = 2 / (a1 - a2) denom1 = (a1 + a2) ** 2 denom2 = (a1 - a2) ** 2 M = np.matrix([[-2 / denom1, -2 / denom1], [-2 / denom2, 2 / denom2]]) cov_b = M * covariance * M.T cov_b ###Output _____no_output_____ ###Markdown --- 2.2Bestimmen Sie die Kovarianzmatrix der transformierten Parameter per Monte Carlo--- ###Code b1s = 2 / (a1s + a2s) b2s = 2 / (a1s - a2s) print('b1 = {}'.format(np.mean(b1s))) print('var = {}'.format(np.var(b1s))) print('b2 = {}'.format(np.mean(b2s))) print('var = {}'.format(np.var(b2s))) plt.hist(b1s, bins=100) plt.show() plt.hist(b2s, bins=100) plt.show() ###Output b1 = 0.6678076014919944 var = 0.0009033935228944976 b2 = 2.312116036153896 var = 31.364584105291485 ###Markdown Dabei tritt das Problem auf, dass für einige Kombinationen von Werten für $a_1$ und $a_2$ der Nenner sehr nah an `0` kommt. Dadurch ergeben sich sehr große (unrealistische) Werte für $b_2$. Wir können dem entgegenwirken, indem wir einen Bereich festlegen in dem wir die Werte für $b_2$ erwarten. ###Code cut = np.logical_and(b2s < 5, b2s > 0) b1s_ = b1s[cut] b2s_ = b2s[cut] print('b2 gefiltert = {}'.format(np.mean(b2s_))) print('var = {}'.format(np.var(b2s_))) plt.hist(b1s_, bins=100) plt.show() plt.hist2d(b1s_, b2s_, bins=100) plt.show() ###Output b2 gefiltert = 2.1386405127392183 var = 0.4553051901466126 ###Markdown Wenn wir uns das Leben etwas erleichtern wollen, koennen wir auch einfach die Funktion `cov` aus Numpy verwenden, die uns fuer zwei Arrays direkt die Kovarianzmatrix ausrechnet ###Code ncov_b = np.cov(b1s_, b2s_) ncov_b ###Output _____no_output_____ ###Markdown --- 2.3Bestimmen Sie analytisch die Unsicherheit von $y$ als Funktion von $x$: 2.3.1unter Verwendung der analytisch bestimmten Kovarianzmatrix von $(b_1, b_2)$--- Zunächst berechnen wir die partiellen Ableitungen von $y$ nach den Koeffizienten $b_1$ und $b_2$.\begin{equation} \frac{\partial y}{\partial b_1} = \frac{-x(1+x)}{b_1^2} \quad\text{und}{\quad} \frac{\partial y}{\partial b_2} = \frac{-x(1-x)}{b_2^2}\end{equation}Damit ergibt sich für die Varianz\begin{align} \sigma_y^2 &= \sum_{ij}\frac{\partial y}{\partial b_i}\frac{\partial y}{\partial b_j}\mathrm{Cov}(b)_{ij} \\ &= \frac{x^2(1+x)^2}{b_1^4}\sigma_{b_1}^2 + 2\frac{x^2 - x^3}{b_1^2b_2^2}\rho\sigma_{b_1}\sigma_{b_2} + \frac{x^2(1-x)^2}{b_2^4}\sigma_{b_2}^2 \\ &= x^2\left[(\alpha+\beta)x^2 + (2\alpha - 2\beta - \gamma)x + (\alpha + \beta + \gamma)\right] \\ &= x^2\left(c_2x^2 + c_1x + c_0\right)\end{align}mit den Koeffizienten\begin{equation} \alpha = \frac{\sigma_{b_1}^2}{b_1^4} \quad,\quad \beta = \frac{\sigma_{b_2}^2}{b_2^4} \quad\text{und}\quad \gamma = \frac{\rho\sigma_{b_1}\sigma_{b_2}}{b_1^2b_2^2} \,.\end{equation}In Zahlen ausgedrückt sind die Koeffizienten ###Code s2_b1 = cov_b[0, 0] s2_b2 = cov_b[1, 1] rho12 = cov_b[0, 1] alpha = s2_b1 / b1 ** 4 beta = s2_b2 / b2 ** 4 gamma = rho12 / b1 ** 2 / b2 ** 2 c2 = alpha + beta c1 = 2 * c2 - gamma c0 = c2 + gamma c2, c1, c0 ###Output _____no_output_____ ###Markdown Es ist also\begin{equation} \sigma_y^2 = x^2\left(0.025x^2 + 0.0425x + 0.0325\right)\end{equation}und damit\begin{equation} \sigma_y = \lvert{}x\rvert\sqrt{0.025x^2 + 0.0425x + 0.0325}\end{equation} ###Code def err_ana(x): return np.abs(x) * np.sqrt(c2 * x ** 2 + c1 * x + c0) xs = np.linspace(-10, 10, 10000) ys = 1 + xs * (xs + 1) / b1 + xs * (xs - 1) / b2 errs = err_ana(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- 2.3.2unter Verwendung der numerisch 
bestimmten Kovarianzmatrix von $(b_1, b_2)$--- Die gleiche Rechnung mit den per Monte Carlo bestimmten Werten ###Code s2_b1 = ncov_b[0, 0] s2_b2 = ncov_b[1, 1] rho12 = ncov_b[0, 1] alpha = s2_b1 / b1 ** 4 beta = s2_b2 / b2 ** 4 gamma = rho12 / b1 ** 2 / b2 ** 2 c2n = alpha + beta c1n = 2 * c2 - gamma c0n = c2 + gamma c2n, c1n, c0n ###Output _____no_output_____ ###Markdown liefert also ein etwas überschätzte Ungenauigkeit von\begin{equation} \sigma_y = \lvert{}x\rvert\sqrt{0.0320x^2 + 0.0563x + 0.0398}\end{equation} ###Code def err_num(x): return np.abs(x) * np.sqrt(c2n * x ** 2 + c1n * x + c0n) errs = err_num(xs) plt.plot(xs, ys) plt.fill_between(xs, ys - errs, ys + errs, alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown --- Aufgabe 3Lösen Sie die obigen Teilaufgaben für $y = f(x)$ mit$$y = \ln\left(1 + a_1x + a_2x^2\right) \quad \text{bzw.} \quad y = \ln\left(1 + \frac{x(1+x)}{b_1} + \frac{x(x-1)}{b_2}\right)$$ Im folgenden nennen wir das neue $y$ der Einfachheit halber $z$.\begin{equation} z = \ln(1 + a_1x + a_2x^2) = \ln(y)\end{equation}Für die Unsicherheit von $z$ ergibt sich\begin{align} \sigma_z &= \sqrt{\left(\frac{\partial z}{\partial y}\right)^2\sigma_y^2} \\ &= \sqrt{\left(\frac{1}{y}\right)^2\sigma_y^2} \\ &= \left\lvert\frac{\sigma_y}{y}\right\rvert \\ &= \frac{0.2\lvert x\rvert\sqrt{1 + 0.25x^2 - 0.4x}}{\lvert 1 + a_1 x + a_2 x^2 \rvert} \,.\end{align}Völlig analog ist die Rechnung für die Reparametrisierung. Hier ergibt sich\begin{align} \sigma_z &= \left\lvert\frac{\sigma_y}{y}\right\rvert \\ &= \frac{\lvert{}x\rvert\sqrt{0.025x^2 + 0.0425x + 0.0325}}{\left\lvert 1 + \frac{x(1+x)}{b_1} + \frac{x(x-1)}{b_2}\right\rvert} \,.\end{align} ###Code def err1(x): return 0.2 * np.abs(x) * np.sqrt(1 + 0.25 * x ** 2 - 0.4 * x) / np.abs(1) def err2(x): return np.abs(x) * np.sqrt(c2 * x ** 2 + c1 * x + c0) / np.abs(1 + xs * (xs + 1) / b1 + xs * (xs - 1) / b2) xs = np.linspace(-10, 10, 10000) ys1 = 1 + a1 * xs + a2 * xs ** 2 ys2 = np.log(1 + xs * (xs + 1) / b1 + xs * (xs - 1) / b2) errs1 = err1(xs) errs2 = err2(xs) plt.plot(xs, ys1) plt.fill_between(xs, ys1 - errs1, ys1 + errs1, alpha=0.5) plt.show() plt.plot(xs, ys2) plt.fill_between(xs, ys2 - errs2, ys2 + errs2, alpha=0.5) plt.show() ###Output _____no_output_____
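###Markdown As a cross-check on the hand-rolled construction of correlated Gaussian pairs used above, NumPy can draw $(a_1, a_2)$ samples directly from the covariance matrix; a minimal sketch (illustrative only, not executed as part of the solution): ###Code # draw correlated (a1, a2) pairs straight from the 2x2 covariance matrix
samples = np.random.multivariate_normal(mean=[a1, a2], cov=np.asarray(covariance), size=10000)
a1s_direct = samples[:, 0]
a2s_direct = samples[:, 1]

# the sample covariance should be close to `covariance`
print(np.cov(a1s_direct, a2s_direct))
 ###Output _____no_output_____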
Quiz/m2_advanced_quants/l3_regression/test_normality.ipynb
###Markdown Testing if a Distribution is Normal Imports ###Code import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt import quiz_tests # Set plotting options %matplotlib inline plt.rc('figure', figsize=(16, 9)) ###Output _____no_output_____ ###Markdown Create normal and non-normal distributions ###Code # Sample A: Normal distribution sample_a = stats.norm.rvs(loc=0.0, scale=1.0, size=(1000,)) # Sample B: Non-normal distribution sample_b = stats.lognorm.rvs(s=0.5, loc=0.0, scale=1.0, size=(1000,)) ###Output _____no_output_____ ###Markdown Boxplot-Whisker Plot and HistogramWe can visually check if a distribution looks normally distributed. Recall that a box whisker plot lets us check for symmetry around the mean. A histogram lets us see the overall shape. A QQ-plot lets us compare our data distribution with a normal distribution (or any other theoretical "ideal" distribution). ###Code # Sample A: Normal distribution sample_a = stats.norm.rvs(loc=0.0, scale=1.0, size=(1000,)) fig, axes = plt.subplots(2, 1, figsize=(16, 9), sharex=True) axes[0].boxplot(sample_a, vert=False) axes[1].hist(sample_a, bins=50) axes[0].set_title("Boxplot of a Normal Distribution"); # Sample B: Non-normal distribution sample_b = stats.lognorm.rvs(s=0.5, loc=0.0, scale=1.0, size=(1000,)) fig, axes = plt.subplots(2, 1, figsize=(16, 9), sharex=True) axes[0].boxplot(sample_b, vert=False) axes[1].hist(sample_b, bins=50) axes[0].set_title("Boxplot of a Lognormal Distribution"); # Q-Q plot of normally-distributed sample plt.figure(figsize=(10, 10)); plt.axis('equal') stats.probplot(sample_a, dist='norm', plot=plt); # Q-Q plot of non-normally-distributed sample plt.figure(figsize=(10, 10)); plt.axis('equal') stats.probplot(sample_b, dist='norm', plot=plt); ###Output _____no_output_____ ###Markdown Testing for Normality Shapiro-WilkThe Shapiro-Wilk test is available in the scipy library. The null hypothesis assumes that the data distribution is normal. If the p-value is greater than the chosen p-value, we'll assume that it's normal. Otherwise we assume that it's not normal.https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.stats.shapiro.html ###Code def is_normal(sample, test=stats.shapiro, p_level=0.05, **kwargs): """Apply a normality test to check if sample is normally distributed.""" t_stat, p_value = test(sample, **kwargs) print("Test statistic: {}, p-value: {}".format(t_stat, p_value)) print("Is the distribution Likely Normal? {}".format(p_value > p_level)) return p_value > p_level # Using Shapiro-Wilk test (default) print("Sample A:-"); is_normal(sample_a); print("Sample B:-"); is_normal(sample_b); ###Output Sample A:- Test statistic: 0.9979031085968018, p-value: 0.24440579116344452 Is the distribution Likely Normal? True Sample B:- Test statistic: 0.9244282245635986, p-value: 5.385336052987241e-22 Is the distribution Likely Normal? False ###Markdown Kolmogorov-SmirnovThe Kolmogorov-Smirnov is available in the scipy.stats library. The K-S test compares the data distribution with a theoretical distribution. We'll choose the 'norm' (normal) distribution as the theoretical distribution, and we also need to specify the mean and standard deviation of this theoretical distribution. 
We'll set the mean and standard deviation of the theoretical normal distribution to the mean and standard deviation of the data distribution. https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.kstest.html Quiz To use the Kolmogorov-Smirnov test, complete the function `is_normal_ks`. To set the variable normal_args, create a tuple with two values; an example of a tuple is `("apple","banana")`. The first value is the mean of the sample, the second is the standard deviation of the sample.**Hint:** Numpy has the functions np.mean() and np.std() ###Code
def is_normal_ks(sample, test=stats.kstest, p_level=0.05, **kwargs):
    """
    sample: a sample distribution
    test: a function that tests for normality
    p_level: if the test returns a p-value greater than p_level, assume normality
    
    return: True if distribution is normal, False otherwise
    """
    normal_args = (np.mean(sample), np.std(sample))
    
    t_stat, p_value = test(sample, 'norm', normal_args, **kwargs)
    print("Test statistic: {}, p-value: {}".format(t_stat, p_value))
    print("Is the distribution Likely Normal? {}".format(p_value > p_level))
    return p_value > p_level

quiz_tests.test_is_normal_ks(is_normal_ks)

# Using Kolmogorov-Smirnov test
print("Sample A:-"); is_normal_ks(sample_a);
print("Sample B:-"); is_normal_ks(sample_b);
###Output Sample A:- Test statistic: 0.030373810749202035, p-value: 0.30968165602546405 Is the distribution Likely Normal? True Sample B:- Test statistic: 0.08827839855083291, p-value: 3.1345145015536597e-07 Is the distribution Likely Normal? False
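###Markdown Since `sample_b` was drawn from a lognormal distribution, taking its logarithm should give an (approximately) normally distributed sample, so both helpers should usually accept it (an optional extra check). ###Code
log_sample_b = np.log(sample_b)
print("log(Sample B), Shapiro-Wilk:-"); is_normal(log_sample_b);
print("log(Sample B), Kolmogorov-Smirnov:-"); is_normal_ks(log_sample_b);
###Output _____no_output_____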
TrainingFacenetRemontada.ipynb
###Markdown RE-TRAINING THE FACENET MODEL USING BLACK-FACES The Arcface loss class ###Code !pip install tensorflow==2.4.0 from tensorflow import keras from keras import regularizers # Original paper: https://arxiv.org/pdf/1801.07698.pdf # Original implementation: https://github.com/deepinsight/insightface # Adapted from tensorflow implementation: https://github.com/luckycallor/InsightFace-tensorflow from keras import backend as K from keras.layers import Layer from keras.metrics import categorical_accuracy import tensorflow as tf import math as m class ArcFace(Layer): '''Custom Keras layer implementing ArcFace including: 1. Generation of embeddings 2. Loss function 3. Accuracy function ''' def __init__(self, output_dim, class_num, margin=0.5, scale=64., **kwargs): self.output_dim = output_dim self.class_num = class_num self.margin = margin self.s = scale self.cos_m = tf.math.cos(margin) self.sin_m = tf.math.sin(margin) self.mm = self.sin_m * margin self.threshold = tf.math.cos(tf.constant(m.pi) - margin) super(ArcFace, self).__init__(**kwargs) def build(self, input_shape): # Create a trainable weight variable for this layer. self.kernel = self.add_weight(name='kernel', shape=(input_shape[1], self.class_num), initializer='glorot_normal', trainable=True) super(ArcFace, self).build(input_shape) # Be sure to call this at the end def call(self, x): embeddings = tf.nn.l2_normalize(x, axis=1, name='normed_embeddings') weights = tf.nn.l2_normalize(self.kernel, axis=0, name='normed_weights') cos_t = tf.matmul(embeddings, weights, name='cos_t') return cos_t def get_logits(self, labels, y_pred): cos_t = y_pred cos_t2 = tf.square(cos_t, name='cos_2') sin_t2 = tf.subtract(1., cos_t2, name='sin_2') sin_t = tf.sqrt(sin_t2, name='sin_t') cos_mt = self.s * tf.subtract(tf.multiply(cos_t, self.cos_m), tf.multiply(sin_t, self.sin_m), name='cos_mt') cond_v = cos_t - self.threshold cond = tf.cast(tf.nn.relu(cond_v, name='if_else'), dtype=tf.bool) keep_val = self.s*(cos_t - self.mm) cos_mt_temp = tf.where(cond, cos_mt, keep_val) mask = tf.one_hot(labels, depth=self.class_num, name='one_hot_mask') inv_mask = tf.subtract(1., mask, name='inverse_mask') s_cos_t = tf.multiply(self.s, cos_t, name='scalar_cos_t') output = tf.add(tf.multiply(s_cos_t, inv_mask), tf.multiply(cos_mt_temp, mask), name='arcface_logits') return output def loss(self, y_true, y_pred): labels = K.argmax(y_true, axis=-1) logits = self.get_logits(labels, y_pred) loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) return loss def accuracy(self, y_true, y_pred): labels = K.argmax(y_true, axis=-1) logits = self.get_logits(labels, y_pred) accuracy = categorical_accuracy(y_true=labels, y_pred=logits) return accuracy def compute_output_shape(self, input_shape): return (input_shape[0], self.output_dim) ###Output _____no_output_____ ###Markdown Alternative implementation of arcface ###Code from keras import backend as K from keras.layers import Layer from keras import regularizers import tensorflow as tf class ArcFace(Layer): def __init__(self, n_classes=10, s=30.0, m=0.50, regularizer=None, **kwargs): super(ArcFace, self).__init__(**kwargs) self.n_classes = n_classes self.s = s self.m = m self.regularizer = regularizers.get(regularizer) def build(self, input_shape): super(ArcFace, self).build(input_shape[0]) self.W = self.add_weight(name='W', shape=(input_shape[0][-1], self.n_classes), initializer='glorot_uniform', trainable=True, regularizer=self.regularizer) def call(self, inputs): x, y = inputs c = K.shape(x)[-1] # normalize 
feature x = tf.nn.l2_normalize(x, axis=1) # normalize weights W = tf.nn.l2_normalize(self.W, axis=0) # dot product logits = x @ W # add margin # clip logits to prevent zero division when backward theta = tf.acos(K.clip(logits, -1.0 + K.epsilon(), 1.0 - K.epsilon())) target_logits = tf.cos(theta + self.m) # sin = tf.sqrt(1 - logits**2) # cos_m = tf.cos(logits) # sin_m = tf.sin(logits) # target_logits = logits * cos_m - sin * sin_m # logits = logits * (1 - y) + target_logits * y # feature re-scale logits *= self.s out = tf.nn.softmax(logits) return out def compute_output_shape(self, input_shape): return (None, self.n_classes) ###Output _____no_output_____ ###Markdown Importing the necessary packages ###Code import tensorflow as tf from tensorflow import keras from keras.preprocessing import image from keras.preprocessing.image import ImageDataGenerator from keras.optimizers import RMSprop from keras.models import load_model import os from os import listdir path="C:/Users/Diana/Desktop/Extracted/Training/" count=0 for f in listdir(path): count=0 for nome in listdir(path+f): count+=1 print("Name: ",f,"count: ",count) import os from os import listdir path="C:/Users/Diana/Desktop/Extracted/Validation/" count=0 for f in listdir(path): count=0 for nome in listdir(path+f): count+=1 print("Name: ",f,"count: ",count) ###Output Name: Apass count: 9 Name: Becky count: 10 Name: Canary count: 17 Name: Cindy count: 17 Name: Gamzi count: 9 Name: Kabuura count: 8 Name: Katatumba count: 20 Name: KenMugabi count: 14 Name: Lucky count: 16 Name: Nabata count: 22 Name: Raymond count: 6 Name: Renal count: 8 Name: Ruth count: 9 Name: Samson count: 22 Name: Vanny count: 14 ###Markdown Setting up the Image Data Generator API ###Code #Import shutil first, this package deletes ipnb_checkpoints files that create a ghost class import shutil #The next step is to delete every ipynb_checkpoints file created by colab #shutil.rmtree("/tmp/training/.ipynb_checkpoints") #be careful with shutil.rmtree() because it deletes every tree in that path. In other words, do not make mistakes. #shutil.rmtree("/tmp/testing/.ipynb_checkpoints") #specify both the training and validation directories TRAINING_DIR="C:/Users/Diana/Desktop/Extracted/Training/" VALIDATION_DIR="C:/Users/Diana/Desktop/Extracted/Validation/" #Initialize Image Data Generator objects, and rescale the image training_datagen=ImageDataGenerator(rescale=1/255) validation_datagen=ImageDataGenerator(rescale=1/255) #Create the image generators that create the create the classes for all images uploaded training_generator=training_datagen.flow_from_directory(TRAINING_DIR,class_mode='categorical',target_size=(160,160)) validation_generator=validation_datagen.flow_from_directory(VALIDATION_DIR,class_mode='categorical',target_size=(160,160)) #Load the facenet model architecture #model=load_model('/tmp/facenet/facenet_keras.h5') ###Output Found 613 images belonging to 15 classes. Found 201 images belonging to 15 classes. 
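###Markdown Before training, it is worth confirming that both generators discovered the same 15 classes in the same order, since a stray ghost class would silently shift the labels (an optional sanity check; `class_indices` is the folder-name-to-label mapping that Keras builds). ###Code
# Both mappings should contain the same 15 names with identical indices
print(training_generator.class_indices)
assert training_generator.class_indices == validation_generator.class_indices
###Output _____no_output_____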
###Markdown Loading the facenet Model architecture ###Code model=load_model('C:/Users/Diana/Desktop/facenet_keras.h5') #A summary of the model architecture model.summary() print("Number of layers in the base model: ", len(model.layers)) local_weights_file='/tmp/facerecog-facenet/facenet_keras_weights.h5' model.load_weights(local_weights_file) for layer in model.layers[:413]: layer.trainable=False local_weights_file='C:/Users/Diana/Desktop/facenet_keras_weights.h5' model.load_weights(local_weights_file) for layer in model.layers: layer.trainable=False from tensorflow.keras import layers from tensorflow.keras import Model #Specify the last layer from the architecture, that you actually want last_layer=model.get_layer('Bottleneck') last_output=last_layer.output #Code from arcface repo from keras.layers import Input from keras.layers import BatchNormalization #customizable arcface layer #af_layer = ArcFace(output_dim=128, class_num=128, margin=0.5, scale=64.) #Flatten the output layer to one dimension x=layers.Flatten()(last_output) af_layer = ArcFace(n_classes=128)[x] #arcface_output = af_layer(last_output) arcface_output=af_layer print(arcface_output) x=layers.Dense(1024,activation='relu')(arcface_output) x=layers.Dense(512,activation='relu')(x) x=layers.Dense(128,activation='relu')(x) x=layers.Dense(15,activation='softmax')(arcface_output) #We're temporarily adding a classification layer, for training purposes #x=layers.Dense(15,activation='softmax')(x) #x=layers.Dense(1024,activation='relu')(x) #x=layers.Dense(128,activation='relu')(x) #x=layers.Dense(15,activation='softmax')(x) model=Model(model.input,x) #Compiling the model using the RMSprop optimizer and categorical cross entropy loss model.compile(optimizer=RMSprop(lr=0.0001),loss='categorical_crossentropy',metrics=['accuracy']) ###Output _____no_output_____ ###Markdown **Temporary Alternative to the above code cell** ###Code #Code from arcface repo from keras.layers import Input from keras.layers import BatchNormalization from keras.layers import Dropout #customizable arcface layer af_layer = ArcFace(output_dim=128, class_num=128, margin=0.5, scale=64.) 
arcface_output = af_layer(last_output) x=layers.Flatten()(arcface_output) #print(arcface_output) x = Dropout(rate=0.3)(x) x=layers.Dense(1024,activation='relu')(arcface_output) x=layers.Dense(512,activation='relu')(x) x = Dropout(rate=0.5)(x) x=layers.Dense(128,activation='relu')(x) x=layers.Dense(15,activation='softmax')(arcface_output) model=Model(model.input,x) model.compile(optimizer=RMSprop(lr=0.0001),loss='categorical_crossentropy',metrics=['accuracy',tf.keras.metrics.AUC(multi_label = True)]) #training for 100 epochs history=model.fit(training_generator,validation_data=validation_generator,epochs=100,verbose=2) print(tf.__version__) print(keras.__version__) ###Output 2.4.0 ###Markdown Lets visualize the output of the training phase 413 from 426 ###Code auc=history.history['auc'] val_auc=history.history['val_auc'] acc=history.history['accuracy'] val_acc=history.history['val_accuracy'] loss=history.history['loss'] val_loss=history.history['val_loss'] epochs=range(len(acc)) import matplotlib.pyplot as plt plt.plot(epochs,loss,'bo',label="Training Loss") plt.plot(epochs,val_loss,'r',label="Validation Loss") plt.legend() plt.show() plt.plot(epochs,auc,'bo',label="Training AUC") plt.plot(epochs,val_auc,'r',label="Validation AUC") plt.legend() plt.figure() plt.plot(epochs,acc,'bo',label="Training Accuracy") plt.plot(epochs,val_acc,'r',label="Validation Accuracy") plt.legend() plt.show() model.summary() model2=Model(model.input,model.layers[-3].output) model2.summary() import cv2 import numpy as np def img_to_encoding(image_path, model): img1 = cv2.imread(image_path, 1) img = img1[...,::-1] img = np.around(np.transpose(img, (2,0,1))/255.0, decimals=12) x_train = np.array([img]) embedding = model.predict_on_batch(x_train) return embedding database = {} database["Joelyne"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/DataCollection/Joelyne1/IMG_3752.jpg", model) database["Diana"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Diana/IMG_1995.jpg",model) database["David"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/David/IMG_1023.jpg", model) database["Dorothy"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Dorothy/Dorothy1.jpg", model) database["Gloria"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Gloria/Gloria1.jpg", model) database["Denis"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Denis/Denis1.jpg", model) database["Melissa"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Melissa/Melissa1.jpg", model) database["Mel1"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Mel1/Mel1.jpg", model) database["Maggie"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Mel2/Maggie1.jpg", model) database["Geog"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Mel3/Geog1.jpg", model) database["Ali"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Ali/Ali1.jpg", model) database["Adonia"] = img_to_encoding("C:/Users/Diana/Desktop/Model Work/CroppedImages/ClusteredImages/Adonia/Adonia.jpg", model) import tensorflow.compat.v1 as tf tf.disable_v2_behavior() sess=tf.Session() from tensorflow.python.framework import graph_io frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"]) 
graph_io.write_graph(frozen, '/tmp/session-frozens', 'inference_graph.pb', as_text=False) import tensorflow.compat.v1 as tf tf.disable_v2_behavior() from keras import backend as K from keras.models import Sequential, Model sess=tf.Session() K.set_learning_phase(0) # Set the learning phase to 0 model = model2 config = model2.get_config() #weights = model2.get_weights() #model = Sequential.from_config(config) output_node = model2.output.name.split(':')[0] # We need this in the next step graph_file = "kerasFacenet.pb" ckpt_file = "kerasFacenet.ckpt" saver = tf.train.Saver(sharded=True) tf.train.write_graph(sess.graph_def, '', graph_file) #saver.save(sess, ckpt_file) from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2 tf.saved_model.save(model2, "/tmp/saved-models") # Convert Keras model to ConcreteFunction full_model = tf.function(lambda x: model2(x)) full_model = full_model.get_concrete_function( tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)) # Get frozen ConcreteFunction frozen_func = convert_variables_to_constants_v2(full_model) frozen_func.graph.as_graph_def() layers = [op.name for op in frozen_func.graph.get_operations()] #print("-" * 50) #print("Frozen model layers: ") for layer in layers: print(layer) #print("-" * 50) #print("Frozen model inputs: ") #print(frozen_func.inputs) #print("Frozen model outputs: ") #print(frozen_func.outputs) # Save frozen graph from frozen ConcreteFunction to hard drive tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir="/tmp/saved-model", name="facenet-Original-LastLayer.pb", as_text=False) ###Output INFO:tensorflow:Assets written to: /tmp/saved-models/assets x model_12/Conv2d_1a_3x3/Conv2D/ReadVariableOp/resource model_12/Conv2d_1a_3x3/Conv2D/ReadVariableOp model_12/Conv2d_1a_3x3/Conv2D model_12/Conv2d_1a_3x3_BatchNorm/scale model_12/Conv2d_1a_3x3_BatchNorm/ReadVariableOp/resource model_12/Conv2d_1a_3x3_BatchNorm/ReadVariableOp model_12/Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3 model_12/Conv2d_1a_3x3_Activation/Relu model_12/Conv2d_2a_3x3/Conv2D/ReadVariableOp/resource model_12/Conv2d_2a_3x3/Conv2D/ReadVariableOp model_12/Conv2d_2a_3x3/Conv2D model_12/Conv2d_2a_3x3_BatchNorm/scale model_12/Conv2d_2a_3x3_BatchNorm/ReadVariableOp/resource model_12/Conv2d_2a_3x3_BatchNorm/ReadVariableOp model_12/Conv2d_2a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Conv2d_2a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Conv2d_2a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Conv2d_2a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Conv2d_2a_3x3_BatchNorm/FusedBatchNormV3 model_12/Conv2d_2a_3x3_Activation/Relu model_12/Conv2d_2b_3x3/Conv2D/ReadVariableOp/resource model_12/Conv2d_2b_3x3/Conv2D/ReadVariableOp model_12/Conv2d_2b_3x3/Conv2D model_12/Conv2d_2b_3x3_BatchNorm/scale model_12/Conv2d_2b_3x3_BatchNorm/ReadVariableOp/resource model_12/Conv2d_2b_3x3_BatchNorm/ReadVariableOp model_12/Conv2d_2b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Conv2d_2b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Conv2d_2b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Conv2d_2b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 
model_12/Conv2d_2b_3x3_BatchNorm/FusedBatchNormV3 model_12/Conv2d_2b_3x3_Activation/Relu model_12/MaxPool_3a_3x3/MaxPool model_12/Conv2d_3b_1x1/Conv2D/ReadVariableOp/resource model_12/Conv2d_3b_1x1/Conv2D/ReadVariableOp model_12/Conv2d_3b_1x1/Conv2D model_12/Conv2d_3b_1x1_BatchNorm/scale model_12/Conv2d_3b_1x1_BatchNorm/ReadVariableOp/resource model_12/Conv2d_3b_1x1_BatchNorm/ReadVariableOp model_12/Conv2d_3b_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Conv2d_3b_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Conv2d_3b_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Conv2d_3b_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Conv2d_3b_1x1_BatchNorm/FusedBatchNormV3 model_12/Conv2d_3b_1x1_Activation/Relu model_12/Conv2d_4a_3x3/Conv2D/ReadVariableOp/resource model_12/Conv2d_4a_3x3/Conv2D/ReadVariableOp model_12/Conv2d_4a_3x3/Conv2D model_12/Conv2d_4a_3x3_BatchNorm/scale model_12/Conv2d_4a_3x3_BatchNorm/ReadVariableOp/resource model_12/Conv2d_4a_3x3_BatchNorm/ReadVariableOp model_12/Conv2d_4a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Conv2d_4a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Conv2d_4a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Conv2d_4a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Conv2d_4a_3x3_BatchNorm/FusedBatchNormV3 model_12/Conv2d_4a_3x3_Activation/Relu model_12/Conv2d_4b_3x3/Conv2D/ReadVariableOp/resource model_12/Conv2d_4b_3x3/Conv2D/ReadVariableOp model_12/Conv2d_4b_3x3/Conv2D model_12/Conv2d_4b_3x3_BatchNorm/scale model_12/Conv2d_4b_3x3_BatchNorm/ReadVariableOp/resource model_12/Conv2d_4b_3x3_BatchNorm/ReadVariableOp model_12/Conv2d_4b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Conv2d_4b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Conv2d_4b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Conv2d_4b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Conv2d_4b_3x3_BatchNorm/FusedBatchNormV3 model_12/Conv2d_4b_3x3_Activation/Relu model_12/Block35_1_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_1_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_1_Branch_0_Conv2d_1x1/Conv2D model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/scale model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_1_Branch_0_Conv2d_1x1_Activation/Relu model_12/Block35_1_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_1_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_1_Branch_1_Conv2d_0a_1x1/Conv2D model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource 
model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_1_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_1_Branch_1_Conv2d_0a_1x1_Activation/Relu model_12/Block35_1_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_1_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_1_Branch_1_Conv2d_0b_3x3/Conv2D model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_1_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_1_Branch_1_Conv2d_0b_3x3_Activation/Relu model_12/Block35_1_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0a_1x1/Conv2D model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_1_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_1_Branch_2_Conv2d_0a_1x1_Activation/Relu model_12/Block35_1_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0b_3x3/Conv2D model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_1_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_1_Branch_2_Conv2d_0b_3x3_Activation/Relu model_12/Block35_1_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0c_3x3/Conv2D model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/scale model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource 
model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_1_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_1_Branch_2_Conv2d_0c_3x3_Activation/Relu model_12/Block35_1_Concatenate/concat/axis model_12/Block35_1_Concatenate/concat model_12/Block35_1_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_1_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_1_Conv2d_1x1/Conv2D model_12/Block35_1_Conv2d_1x1/BiasAdd/ReadVariableOp/resource model_12/Block35_1_Conv2d_1x1/BiasAdd/ReadVariableOp model_12/Block35_1_Conv2d_1x1/BiasAdd model_12/Block35_1_ScaleSum/mul/y model_12/Block35_1_ScaleSum/mul model_12/Block35_1_ScaleSum/add model_12/Block35_1_Activation/Relu model_12/Block35_2_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_2_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_2_Branch_0_Conv2d_1x1/Conv2D model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/scale model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_2_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_2_Branch_0_Conv2d_1x1_Activation/Relu model_12/Block35_2_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_2_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_2_Branch_1_Conv2d_0a_1x1/Conv2D model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_2_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_2_Branch_1_Conv2d_0a_1x1_Activation/Relu model_12/Block35_2_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_2_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_2_Branch_1_Conv2d_0b_3x3/Conv2D model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_2_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_2_Branch_1_Conv2d_0b_3x3_Activation/Relu model_12/Block35_2_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0a_1x1/Conv2D model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/scale 
model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_2_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_2_Branch_2_Conv2d_0a_1x1_Activation/Relu model_12/Block35_2_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0b_3x3/Conv2D model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_2_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_2_Branch_2_Conv2d_0b_3x3_Activation/Relu model_12/Block35_2_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0c_3x3/Conv2D model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/scale model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_2_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_2_Branch_2_Conv2d_0c_3x3_Activation/Relu model_12/Block35_2_Concatenate/concat/axis model_12/Block35_2_Concatenate/concat model_12/Block35_2_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_2_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_2_Conv2d_1x1/Conv2D model_12/Block35_2_Conv2d_1x1/BiasAdd/ReadVariableOp/resource model_12/Block35_2_Conv2d_1x1/BiasAdd/ReadVariableOp model_12/Block35_2_Conv2d_1x1/BiasAdd model_12/Block35_2_ScaleSum/mul/y model_12/Block35_2_ScaleSum/mul model_12/Block35_2_ScaleSum/add model_12/Block35_2_Activation/Relu model_12/Block35_3_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_3_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_3_Branch_0_Conv2d_1x1/Conv2D model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/scale model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource 
model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_3_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_3_Branch_0_Conv2d_1x1_Activation/Relu model_12/Block35_3_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_3_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_3_Branch_1_Conv2d_0a_1x1/Conv2D model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_3_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_3_Branch_1_Conv2d_0a_1x1_Activation/Relu model_12/Block35_3_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_3_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_3_Branch_1_Conv2d_0b_3x3/Conv2D model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_3_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_3_Branch_1_Conv2d_0b_3x3_Activation/Relu model_12/Block35_3_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0a_1x1/Conv2D model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_3_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_3_Branch_2_Conv2d_0a_1x1_Activation/Relu model_12/Block35_3_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0b_3x3/Conv2D model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource 
model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_3_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_3_Branch_2_Conv2d_0b_3x3_Activation/Relu model_12/Block35_3_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0c_3x3/Conv2D model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/scale model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_3_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_3_Branch_2_Conv2d_0c_3x3_Activation/Relu model_12/Block35_3_Concatenate/concat/axis model_12/Block35_3_Concatenate/concat model_12/Block35_3_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_3_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_3_Conv2d_1x1/Conv2D model_12/Block35_3_Conv2d_1x1/BiasAdd/ReadVariableOp/resource model_12/Block35_3_Conv2d_1x1/BiasAdd/ReadVariableOp model_12/Block35_3_Conv2d_1x1/BiasAdd model_12/Block35_3_ScaleSum/mul/y model_12/Block35_3_ScaleSum/mul model_12/Block35_3_ScaleSum/add model_12/Block35_3_Activation/Relu model_12/Block35_4_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_4_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_4_Branch_0_Conv2d_1x1/Conv2D model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/scale model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_4_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_4_Branch_0_Conv2d_1x1_Activation/Relu model_12/Block35_4_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_4_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_4_Branch_1_Conv2d_0a_1x1/Conv2D model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_4_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_4_Branch_1_Conv2d_0a_1x1_Activation/Relu model_12/Block35_4_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_4_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_4_Branch_1_Conv2d_0b_3x3/Conv2D model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/scale 
model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_4_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_4_Branch_1_Conv2d_0b_3x3_Activation/Relu model_12/Block35_4_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0a_1x1/Conv2D model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_4_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_4_Branch_2_Conv2d_0a_1x1_Activation/Relu model_12/Block35_4_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0b_3x3/Conv2D model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_4_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_4_Branch_2_Conv2d_0b_3x3_Activation/Relu model_12/Block35_4_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0c_3x3/Conv2D model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/scale model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_4_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_4_Branch_2_Conv2d_0c_3x3_Activation/Relu model_12/Block35_4_Concatenate/concat/axis model_12/Block35_4_Concatenate/concat model_12/Block35_4_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_4_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_4_Conv2d_1x1/Conv2D model_12/Block35_4_Conv2d_1x1/BiasAdd/ReadVariableOp/resource 
model_12/Block35_4_Conv2d_1x1/BiasAdd/ReadVariableOp model_12/Block35_4_Conv2d_1x1/BiasAdd model_12/Block35_4_ScaleSum/mul/y model_12/Block35_4_ScaleSum/mul model_12/Block35_4_ScaleSum/add model_12/Block35_4_Activation/Relu model_12/Block35_5_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_5_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_5_Branch_0_Conv2d_1x1/Conv2D model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/scale model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_5_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_5_Branch_0_Conv2d_1x1_Activation/Relu model_12/Block35_5_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_5_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_5_Branch_1_Conv2d_0a_1x1/Conv2D model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_5_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_5_Branch_1_Conv2d_0a_1x1_Activation/Relu model_12/Block35_5_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_5_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_5_Branch_1_Conv2d_0b_3x3/Conv2D model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_5_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_5_Branch_1_Conv2d_0b_3x3_Activation/Relu model_12/Block35_5_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0a_1x1/Conv2D model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/scale model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 
model_12/Block35_5_Branch_2_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Block35_5_Branch_2_Conv2d_0a_1x1_Activation/Relu model_12/Block35_5_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0b_3x3/Conv2D model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/scale model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_5_Branch_2_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_5_Branch_2_Conv2d_0b_3x3_Activation/Relu model_12/Block35_5_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0c_3x3/Conv2D/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0c_3x3/Conv2D model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/scale model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block35_5_Branch_2_Conv2d_0c_3x3_BatchNorm/FusedBatchNormV3 model_12/Block35_5_Branch_2_Conv2d_0c_3x3_Activation/Relu model_12/Block35_5_Concatenate/concat/axis model_12/Block35_5_Concatenate/concat model_12/Block35_5_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block35_5_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block35_5_Conv2d_1x1/Conv2D model_12/Block35_5_Conv2d_1x1/BiasAdd/ReadVariableOp/resource model_12/Block35_5_Conv2d_1x1/BiasAdd/ReadVariableOp model_12/Block35_5_Conv2d_1x1/BiasAdd model_12/Block35_5_ScaleSum/mul/y model_12/Block35_5_ScaleSum/mul model_12/Block35_5_ScaleSum/add model_12/Block35_5_Activation/Relu model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3/Conv2D/ReadVariableOp/resource model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3/Conv2D/ReadVariableOp model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3/Conv2D model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/scale model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/ReadVariableOp/resource model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/ReadVariableOp model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3 model_12/Mixed_6a_Branch_0_Conv2d_1a_3x3_Activation/Relu model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1/Conv2D model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/scale 
model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_BatchNorm/FusedBatchNormV3 model_12/Mixed_6a_Branch_1_Conv2d_0a_1x1_Activation/Relu model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3/Conv2D/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3/Conv2D model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/scale model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_BatchNorm/FusedBatchNormV3 model_12/Mixed_6a_Branch_1_Conv2d_0b_3x3_Activation/Relu model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3/Conv2D/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3/Conv2D/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3/Conv2D model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/scale model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_BatchNorm/FusedBatchNormV3 model_12/Mixed_6a_Branch_1_Conv2d_1a_3x3_Activation/Relu model_12/Mixed_6a_Branch_2_MaxPool_1a_3x3/MaxPool model_12/Mixed_6a/concat/axis model_12/Mixed_6a/concat model_12/Block17_1_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp/resource model_12/Block17_1_Branch_0_Conv2d_1x1/Conv2D/ReadVariableOp model_12/Block17_1_Branch_0_Conv2d_1x1/Conv2D model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/scale model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp/resource model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/ReadVariableOp model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp/resource model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1/resource model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3/ReadVariableOp_1 model_12/Block17_1_Branch_0_Conv2d_1x1_BatchNorm/FusedBatchNormV3 model_12/Block17_1_Branch_0_Conv2d_1x1_Activation/Relu model_12/Block17_1_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp/resource model_12/Block17_1_Branch_1_Conv2d_0a_1x1/Conv2D/ReadVariableOp model_12/Block17_1_Branch_1_Conv2d_0a_1x1/Conv2D model_12/Block17_1_Branch_1_Conv2d_0a_1x1_BatchNorm/scale 
[... remaining frozen-graph node names elided: the rest of model_12's Block17_1 through Block17_10 and Block8_1 through Block8_6 Inception-ResNet blocks, the Mixed_7a reduction block, followed by AvgPool, Dropout, Bottleneck/MatMul, and the final Identity output ...]
S43_logistics_sensing.ipynb
###Markdown ###Code %pip install -q -U gtbook import numpy as np import plotly.express as px try: import google.colab except: import plotly.io as pio pio.renderers.default = "png" import gtsam from gtbook.display import show from gtbook import logistics N = 5 indices = range(1, N+1) u = {k:gtsam.symbol('u',k) for k in indices[:-1]} # controls u_k x = {k:gtsam.symbol('x',k) for k in indices} # states x_k z = {k:gtsam.symbol('z',k) for k in indices} # measurements z_k ###Output _____no_output_____ ###Markdown Sensor Models with Continuous State> From Gaussian to non-Gaussian sensors. ###Code from gtbook.display import randomImages from IPython.display import display display(randomImages(4, 3, "steampunk", 1)) ###Output _____no_output_____ ###Markdown In the previous chapters, we have used sensors that could be characterized using fairly simple probabilisticmodels, either discrete conditional probability distributions (e.g., conductivity for the trash sorting robotand light sensing for the vacuum cleaning robot), or one-dimensional Gaussian distributions (as for the trashsorting robot's weight sensor). In this section we introduce three more realistic sensors,each of which are similar to sensors that are frequently used in modern robotic systems:- a proximity sensor;- an RFID range sensor, which has a non-linear measurement prediction model- a GPS-like location sensor, which uses a linear-Gaussian conditional density The proximity sensor detects when the robot is near an obstacle. Therefore, this sensor'sresponse depends on the geometry of the environment.Let us extend the example from the last section, a warehouse of 100m x 50m, by adding four sets of shelvesplaced at regular intervalsA base map for this environment is defined in `gtbook.logistics`. Below we plot this `base_map` as an image: ###Code logistics.show_map(logistics.base_map) ###Output _____no_output_____ ###Markdown A proximity sensor> A binary sensor over a continuous space.We consider a sensor that measures the *proximity* of the robot to obstacles. For example, this could be operationalized using small infrared sensor/receiver pairs, or a magnetic sensor that measures proximity to one of the metal structures within the warehouse.This is a *binary* sensor, just like the conductivity sensor we saw in the trash sorting example. However, a big difference is that the measurement $z_k$ of this sensor at time $k$ depends on the *continuous* state $x_k$. In other words, this is a kind of *hybrid*, discrete-continuous sensor model. This sensor can be modeled using a **signed distance function** (SDF), which is a well-known concept from graphics.An SDF measures the distance from any point $x$ to the nearest obstacle, is positive if this location is outside the obstacle, and negative if it is inside the obstacle. 
Since the robot itself operates in free space, its own position will only ever correspond to non-negative SDF values. Let $X_{obs}$ denote the set of points on the boundary of the obstacles (the borders of the shelves, and the walls that enclose the warehouse). For a given point $x \in {\cal D}$, the distance to the nearest obstacle can be defined by $$ d(x) = \min_{x' \in X_{obs}} \|x - x'\|$$ We now define the signed distance to be negative for $x$ inside the obstacle, and positive for points outside the obstacle: $$\mathrm{sdf}(x)= \left\{ \begin{array}{lcr} - d(x) & & x \mathrm{\; inside\; obstacle}\\ + d(x) & & x \mathrm{\; outside\; obstacle}\\\end{array}\right.$$ We can use the function *sdf* to model our proximity sensor by defining a conditional probability distribution $P(z_k=\text{ON}|x_k)$ as a function of $\mathrm{sdf}(x)$. For example, if obstacle detection is very reliable for $d(x) < d_0$, but degrades rapidly as $d(x)$ increases (i.e., as the robot moves further from obstacles), we might define $$P(z_k=\text{ON}|x_k)= \left\{ \begin{array}{lcr} 1 & & d(x) < d_0 \\ e^{- \alpha d(x)} && \mathrm{otherwise}\\\end{array}\right.$$ where the value of $\alpha$ determines how rapidly the probability decreases. Because the conditional probability $P(\cdot | x_k)$ is a valid probability distribution, we can immediately conclude that $P(z_k=\text{OFF}|x_k) = 1 - P(z_k=\text{ON}|x_k)$. Below, we use a simpler model, which assumes that the proximity sensor perfectly detects when the robot is within distance $d_0$ of an obstacle: $$P(z_k=\text{ON}|x_k)= \left\{ \begin{array}{lcr} 1 & & d(x) < d_0 \\ 0 && \mathrm{otherwise}\\\end{array}\right.$$ Below we show the ON and OFF *likelihood* images for this simple model. Recall that the likelihood $l(x_k;z_k=\text{ON})\propto P(z_k=\text{ON}|x_k)$ is a function *of the state* $x_k$, so we can show it as a map: ###Code logistics.show_map(logistics.proximity_map_on) ###Output _____no_output_____ ###Markdown Note that we denote a likelihood over a continuous variable with lowercase $l()$, in analogy to the lowercase $p()$ we use for densities. At any location, the likelihood $l(x_k;z_k=\text{OFF})$ of $x_k$ given that the sensor is OFF will be the mirror image of the map above: ###Code logistics.show_map(logistics.proximity_map_off) ###Output _____no_output_____ ###Markdown A Range Sensor> A range sensor is *non-linear*. Let us assume that the operator of the warehouse has installed a number of *beacons* throughout the warehouse, at strategically placed locations. The measurement function $h(x_k; b_i)$ for a sensor measuring the range to a beacon at location $b_i$ is non-linear: $$h(x_k;b_i) = \|x_k - b_i\| = \sqrt{(x_k - b_i)^T(x_k - b_i)}$$ The energy emitted by an RFID beacon is, of course, finite. Therefore, there is some maximal distance, say $d_{\max}$, beyond which the sensor is unable to detect the beacon. In this case, the sensor does not return any range measurement, and indicates "no beacon present." This is actually not an unrealistic model; there is a technology called [radio frequency identification (RFID)](https://en.wikipedia.org/wiki/Radio-frequency_identification), which can be detected using a small radio receiver, and some of the more expensive variants allow for the range to the RFID to be measured, if not very accurately. 
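Before moving on, here is a small, self-contained sketch (not part of the original notebook) of how such a thresholded proximity model could be evaluated at a single query location. It uses a toy circular obstacle in place of the warehouse geometry, so `obstacle_center`, `obstacle_radius`, and `signed_distance` are illustrative assumptions rather than `gtbook.logistics` functionality. ###Code
import numpy as np

# Toy stand-in for the warehouse geometry: one circular obstacle of radius 2 m
# centered at (10, 5). Purely illustrative; the book's own maps live in
# gtbook.logistics.
obstacle_center = np.array([10.0, 5.0])
obstacle_radius = 2.0

def signed_distance(x):
    """Signed distance to the toy obstacle: negative inside, positive outside."""
    return np.linalg.norm(x - obstacle_center) - obstacle_radius

def p_proximity_on(x, d0=1.0, alpha=1.0, hard=True):
    """P(z=ON | x): hard threshold model (hard=True) or exponential fall-off."""
    d = signed_distance(x)
    if d < d0:
        return 1.0
    return 0.0 if hard else float(np.exp(-alpha * d))

print(p_proximity_on(np.array([12.5, 5.0])))              # close to the obstacle
print(p_proximity_on(np.array([20.0, 5.0])))              # far away, hard model
print(p_proximity_on(np.array([20.0, 5.0]), hard=False))  # far away, soft model
###Output _____no_output_____ ###Markdown The proximity likelihood maps plotted above encode this same idea over the entire warehouse.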
As with the motion model described above, it is typical to assume additive Gaussian noise for sensors, leading to the measurement model: $$z_k = h(x_k;b_i) + w_k$$ in which $w_k$ is the noise term (unrelated to the noise in our motion model). Using this model, we see that the conditional probability of a range measurement $z_k$ given a continuous state $x_k$ *is* a Gaussian, even though its mean $\mu$ is computed as a nonlinear function of $x_k$. $$\begin{align*}p(z_k|x_k; b_i) &= \mathcal{N}(z_k;\mu=h(x_k;b_i), \sigma^2) \\&= \frac{1}{\sqrt{2\pi\sigma^2}} \exp\{-\frac{1}{2\sigma^2}(z_k-h(x_k;b_i))^2\}\end{align*}$$ Note that the range is one-dimensional, so we again use the simpler univariate Gaussian notation, and we use the variance $\sigma^2$ to characterize the zero-mean Gaussian noise. The equation below for the *likelihood* of the state $x_k$ given a measurement $z_k$ (to beacon $b_i$) seems similar to a Gaussian density: $$L(x_k;z_k, b_i) = \exp\{-\frac{1}{2\sigma^2}(z_k-h(x_k;b_i))^2\}.$$ However, because the likelihood is a function of $x_k$, and not a function of $z_k$ (remember, $z_k$ is *given* when we compute the likelihood for state $x_k$), the likelihood does not even remotely look like a Gaussian! In fact, the likelihood will be high where the state agrees with the given range measurement, i.e., in an *annulus* of the right radius around the beacon. The "width" of the annulus will be proportional to the measurement noise $\sigma$. To make this concrete, let us add 8 beacons to the base map, at either side of the shelves, which seems very useful for robot navigation. Again, we defined these in `gtbook.logistics.beacons`, and we show them below on the base map: ###Code logistics.show_map(logistics.base_map, logistics.beacons) ###Output _____no_output_____ ###Markdown We can implement a range function that works with any beacon: ###Code def rfid_range(position, beacon, max_range=8): """return range to given beacon""" range = np.linalg.norm(position-beacon) return float('inf') if range>max_range else range ###Output _____no_output_____ ###Markdown For example, if the robot is at $(20.5, 7.5)$, we are within range of beacon $0$: ###Code state = gtsam.Point2(20.5, 7.5) beacon0 = logistics.beacons[0] zk = rfid_range(state, beacon0) print(f"range to beacon 0 = {zk}") ###Output range to beacon 0 = 4.031128874149275 ###Markdown Now we are in a position to show the annulus-like likelihood images: ###Code dist = np.array([[rfid_range(xy, beacon0) for xy in row] for row in logistics.map_coords]) sigma = 1 # In meters likelihood = np.exp(-1/(2 * sigma**2) * (zk - dist)**2) logistics.show_map(likelihood, logistics.beacons) ###Output _____no_output_____ ###Markdown As you can see, *all* positions at a range of approximately 4 meters have a high likelihood. Negative Information> Negative information is also information! The likelihood model above is useful when a beacon is within sensing range of the RFID reader. But what happens when all beacons are out of range? Let us assume that the RFID reader always returns the range to the closest beacon, along with its identification number, but it returns a special value when all the beacons are out of range. Formally, we can model this as a *pair* of values that is returned by the sensor, $$z_{RFID}\in \bar{N} \times \mathbb{\bar{R}}^+,$$ where $\bar{N}$ is the set of integers extended with `None`, and $\mathbb{\bar{R}}^+$ is the set of positive real numbers extended with $\infty$. 
In code this is easy to implement: if all beacons are out of range we return `None` for the identification and `float('inf')` for the range. For example, here is some code that returns the range to the nearest beacon (given in a list `beacons`) and `None, float('inf')` if all are out of range: ###Code def rfid_measurement(position, max_range=8): """Simulate RFID reader that returns nearest RFID range or (None,inf).""" ranges = [rfid_range(position, beacon, max_range) for beacon in logistics.beacons] range = min(ranges) return (np.argmin(ranges), range) if range<=max_range else (None,range) print(rfid_measurement(gtsam.Point2(20,7))) print(rfid_measurement(gtsam.Point2(7,7))) ###Output (0, 4.716990566028302) (None, inf) ###Markdown The special "out of range" measurement conveys a lot of information as well! If we are within range of a sensor, the likelihood function is as above (an annulus). But when we are *out* of range, the likelihood function has a very strange shape indeed: it will be 1.0 for all continuous states out of range of all beacons, and zero for states within range of a beacon. This makes sense: if the robot were within range of a beacon, the sensor would have returned an actual range, *not* infinity. This in turn tells us that the robot must be somewhere outside the range of all beacons, which is very powerful information. ###Code def out_of_rfid_range(position, max_range=8): id, _ = rfid_measurement(position, max_range) return id == None out_of_bound_map = np.array([[out_of_rfid_range(xy) for xy in row] for row in logistics.map_coords]) logistics.show_map(out_of_bound_map, logistics.beacons) ###Output _____no_output_____ ###Markdown A GPS-Like Location Sensor> A simple conditionally Gaussian measurement. The last sensor we will consider, a GPS-like location sensor, is a bit of a cheat. In fact, there are no great indoor GPS-like sensors. People have tried all kinds of things, like triangulating WiFi signals etc., but in fact a cheap and reliable GPS-like sensor that works indoors is still not available at the time of writing. However, it is a good straw man to introduce a simple, conditionally Gaussian measurement. Also, it can be a good way to model a more complicated sensor, e.g., a camera-based localization system that uses some type of map. The **measurement model** $h(x_k)$, which predicts the measurement $z_k$ given the state $x_k$, will again be a conditional Gaussian. In this example, let us assume the measurement is simply the location of the robot, but measured in *cm*. The measurement model in this case is *linear*, and we again assume additive noise: $${z}_k = h(x_k) + w_k = C x_k + w_k$$ The matrix $C$ performs a linear transformation on the state, such as converting between different units. Suppose for example that the state is defined in meters and the sensor measures in centimeters. In this case, we could use the $2\times2$ diagonal matrix $C=\text{diag}(100,100)$ to apply the appropriate scaling factor to convert from meters to centimeters. Under the assumption that $w_k$ is again zero-mean, Gaussian noise, the conditional density $p(z_k|x_k)$ of the measurement $z_k$ given the state $x_k$ is then $$\begin{align*}p(z_k|x_k) &= \mathcal{N}(z_k;\mu=C x_k, \Sigma=R) \\&= \frac{1}{\sqrt{|2\pi R|}} \exp\{-0.5 (z_k-C x_k)^TR^{-1}(z_k-C x_k)\}\end{align*}$$ where $R$ is the traditional symbol used for measurement model covariance. Assuming a fairly inaccurate sensor, with 30cm standard deviation, we have $R=\text{diag}(30^2,30^2)$. 
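To make the linear-Gaussian model concrete, here is a small numerical sketch (not from the original notebook) that simulates such a GPS-like measurement with NumPy, using the $C=\text{diag}(100,100)$ scaling and the 30cm standard deviation quoted above; the function and variable names are illustrative assumptions. ###Code
import numpy as np

rng = np.random.default_rng(42)

C = np.diag([100.0, 100.0])      # meters -> centimeters, as described above
R = np.diag([30.0**2, 30.0**2])  # 30 cm standard deviation on each axis

def simulate_gps_like(x_k):
    """Sample z_k = C x_k + w_k with w_k ~ N(0, R); a sketch, not book code."""
    w_k = rng.multivariate_normal(mean=np.zeros(2), cov=R)
    return C @ x_k + w_k

x_k = np.array([20.0, 10.0])     # a robot position in meters
print("noise-free measurement (cm):", C @ x_k)
print("noisy measurement (cm):     ", simulate_gps_like(x_k))
###Output _____no_output_____ ###Markdown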
Note that the $2\times 2$ measurement covariance matrix $R$ is expressed in the units of the measurement, i.e., centimeters, *not* the units of the state. Simulating States *and* Measurements> Dynamic Bayes nets to the rescue, again! We can now extend the Gaussian DBN from the previous section to include the measurements, so that we canencode the joint density $p(X,Z|U)$ on states $X$ *and* measurements $Z$, given the controls $U$:$$p(X,Z|U) = p(x_1)p(z_1|x_1) \prod_{k=2}^N p(x_k|x_{k-1}, u_k)p(z_k|x_k).$$Let us add the measurement conditionals to the DBN from the previous section to get an extended dynamic Bayes net: ###Code gaussianBayesNet = gtsam.GaussianBayesNet() A, B, C = np.eye(2), np.eye(2), 100 * np.eye(2) motion_model_sigma = 0.2 measurement_model_sigma = 30 for k in indices: gaussianBayesNet.push_back(gtsam.GaussianConditional.FromMeanAndStddev( z[k], C, x[k], [0, 0], measurement_model_sigma)) for k in reversed(indices[:-1]): gaussianBayesNet.push_back(gtsam.GaussianConditional.FromMeanAndStddev( x[k+1], A, x[k], B, u[k], [0, 0], motion_model_sigma)) p_x1 = gtsam.GaussianDensity.FromMeanAndStddev(x[1], [20,10], 0.5) gaussianBayesNet.push_back(p_x1) position_hints = {'u': 2, 'x': 1, 'z': 0} show(gaussianBayesNet, hints=position_hints, boxes=set(list(u.values()))) ###Output _____no_output_____ ###Markdown This now allows us to simulate a trajectory and *simultaneously* simulate a set of measurements: ###Code control_tape = gtsam.VectorValues() for k, (ux,uy) in zip(indices[:-1], [(2,0), (2,0), (0,2), (0,2)]): control_tape.insert(u[k], gtsam.Point2(ux,uy)) gaussianBayesNet.sample(control_tape) ###Output _____no_output_____
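###Markdown The call above returns one joint sample of all states and measurements. A possible way to inspect it is sketched below; this assumes the returned object behaves like a standard `gtsam.VectorValues`, which is an assumption about the Python API rather than something stated in the text. ###Code
# Inspect one simulated trajectory and its simulated measurements.
# Assumes gaussianBayesNet.sample(...) returns a gtsam.VectorValues-like object.
sample = gaussianBayesNet.sample(control_tape)
for k in indices:
    print(f"k={k}: x = {sample.at(x[k])}, z (cm) = {sample.at(z[k])}")
###Output _____no_output_____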
examples/Stock display.ipynb
###Markdown Stock displays. Code cribbed from [this notebook](http://nbviewer.ipython.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb) by [Thomas Wiecki](https://github.com/twiecki). ###Code stock = 'MSFT' # Display names are stored in notebook metadata days_back = 600 import datetime import pandas as pd import pandas_datareader.data as webdata from matplotlib import pyplot as plt %matplotlib inline now = datetime.date.today() start = now - datetime.timedelta(days=days_back) df = webdata.get_data_yahoo(stock, start=start, end=now) df close_px = df['Adj Close'] mavg = close_px.rolling(window=30, center=False).mean() close_px.plot(label=stock) mavg.plot(label='mavg') plt.legend() ###Output _____no_output_____ ###Markdown Stock displays. Code cribbed from [this notebook](http://nbviewer.ipython.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb) by [Thomas Wiecki](https://github.com/twiecki). ###Code stock = 'MSFT' # Display names are stored in notebook metadata days_back = 600 import datetime import pandas as pd import pandas_datareader.data as webdata from matplotlib import pyplot as plt %matplotlib inline now = datetime.date.today() start = now - datetime.timedelta(days=days_back) df = webdata.get_data_yahoo(stock, start=start, end=now) df close_px = df['Adj Close'] mavg = close_px.rolling(window=30, center=False).mean() close_px.plot(label=stock) mavg.plot(label='mavg') plt.legend() ###Output _____no_output_____
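###Markdown As a small aside (not part of the original notebook), the rolling-average step can be tried offline on a synthetic price series, which avoids the network call to Yahoo; the `synthetic_close` series and its parameters are arbitrary illustrative choices. ###Code
import numpy as np
import pandas as pd

# Synthetic daily series standing in for the downloaded 'Adj Close' column.
idx = pd.date_range(end=pd.Timestamp.today().normalize(), periods=600, freq="D")
rng = np.random.default_rng(0)
synthetic_close = pd.Series(100 + rng.normal(0, 1, len(idx)).cumsum(), index=idx)

# Same 30-day moving average used above, applied to the synthetic series.
synthetic_mavg = synthetic_close.rolling(window=30).mean()
print(synthetic_mavg.tail())
###Output _____no_output_____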
Introduction to Molecular Dynamics.ipynb
###Markdown Introduction to Molecular DynamicsIn classical molecular dynamics (MD), a simulation engine numerically integrates Newton's equations of motion to move particles through time. The general process is similar to the previous simulation example of the ball falling from rest. Since acceleration is the change in velocity over time, and velocity is the change in position over time, we can therefore evolve the position of particles in the system. However, instead of a constant acceleration due to gravity, acceleration results from the interaction between particles in the system following the relationship $F = ma$, where $F$ is the force, $m$ is the mass and $a$ is acceleration. Newton's equations represent a set of 'N' second order differential equations (where 'N' corresponds to the number of particles). There are many ways in which to perform the numerical integration, some better than others in terms of stability and precision. Velocity Verlet algorithmThis method is a commonly used, robust scheme, e.g., used in packages such [LAMMPS](http://lammps.sandia.gov). The Velocity Verlet algorithm updates the position at time $t+\delta t$ based on the velocity ($\vec{v_i}$) and force at the current time $t$. $\vec{r_i}(t+\delta t) = \vec{r_i}(t) + \vec{v_i}(t)\delta t + \frac{1}{2m_i}\vec{F_i}(t)\delta t^2$ The velocity is updated using the following equation:$\vec{v_i}(t+\delta t) = \vec{v_i}(t) + \frac{1}{2m_i}\left[\vec{F_i}(t)+\vec{F_i}(t+\delta t)\right]\delta t$ For more detail information see [David Kofke's lecture slides on the Velocity Verlet algorithm (and other integration methods).](https://www.eng.buffalo.edu/~kofke/ce530/Lectures/Lecture11/sld041.htm) ExerciseModify the ball falling code (included below) to use the Velocity Verlet algorithm to update positions. Here we will assume that the force acting on the ball follows $F=ma$ where we can assume the ball is of mass 'm' (note mass will cancel out in the equation, thus we do not actually need to define it). Also note, since this ball experiences a constant acceleration, the force is constant throughout time. * Use the ```height_array_vv``` to store the position for the Velocity Verlet algorithm and the ```velocity_array_vv``` to store the velocity for the Velocity Verlet algorithm. You can leave the other code in place for comparison.* If you have done this correctly, the output plots on the right hand column (for Velocity Verlet) will be identical to the left hand column (using the kinematic equations). 
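To make the update rule concrete before you attempt the exercise, here is a small sketch (not the exercise solution, and not part of the original notebook) of velocity Verlet applied to a 1-D harmonic oscillator; the spring constant, mass, and timestep are arbitrary illustrative choices. ###Code
import numpy as np

# Velocity Verlet for a 1-D harmonic oscillator, F(x) = -k*x, with m = 1.
k, m, dt, steps = 1.0, 1.0, 0.01, 1000
x, v = 1.0, 0.0                  # initial position and velocity
force = lambda pos: -k * pos

for _ in range(steps):
    f_old = force(x)
    x = x + v * dt + 0.5 * (f_old / m) * dt**2     # position update
    f_new = force(x)
    v = v + 0.5 * ((f_old + f_new) / m) * dt       # velocity update

# With x(0)=1 and v(0)=0 the analytic solution is x(t) = cos(t).
t = steps * dt
print(f"x({t:.2f}) = {x:.4f}, analytic cos({t:.2f}) = {np.cos(t):.4f}")
###Output _____no_output_____ ###Markdown The same two-step update (positions first, then velocities using the old and new forces) is what the exercise below asks you to apply to the falling ball.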
###Code %matplotlib inline import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('default') g = -9.8 #gravitation constant, m/s^2 dt = 0.0001 #timestep, s timesteps = 22000 #total number of timesteps to consider velocity_i = 5.0 #initial velocity, m/s height_i = 10.0 #initial height, m velocity_array = np.zeros(timesteps) height_array = np.zeros(timesteps) #arrays for the velocity verlet algorithm height_array_vv = np.zeros(timesteps) velocity_array_vv = np.zeros(timesteps) time_array = np.zeros(timesteps) height_array[0] = height_i height_array_vv[0] = height_i velocity_array[0] = velocity_i velocity_array_vv[0] = velocity_i current_height = height_i for i in range (0, timesteps-1): time_array[i+1] = time_array[i] + dt velocity_array[i+1] = velocity_array[i] + (g*dt) height_array[i+1] = height_array[i]+ 0.5*(velocity_array[i]+velocity_array[i+1])*dt #add the velocity verlet code here height_array_vv[i+1] = velocity_array_vv[i+1] = #if we have reached the ground, zero out the position and velocity if height_array[i+1] <= 0: velocity_array[i+1] = 0 height_array[i+1] = 0 #velocity verlet arrays height_array_vv[i+1] = 0 velocity_array_vv[i+1] = 0 ax = plt.subplot(2,2,1) ax.plot(time_array, height_array, c='blue') plt.ylabel('height (m)') ax = plt.subplot(2,2,2) ax.plot(time_array, height_array_vv, c='green') ax = plt.subplot(2,2,3) ax.plot(time_array, velocity_array, c='blue') plt.ylabel('velocity (m/s)') plt.xlabel('time (s)') ax = plt.subplot(2,2,4) ax.plot(time_array, velocity_array_vv, c='green') plt.xlabel('time (s)') plt.show() ###Output _____no_output_____ ###Markdown Interaction potentialAs noted above, we can numerically integrate the motion of particles in a system if we know the forces that act upon them. Typically, we think about the forces acting upon particles through consideration of the interaction potential, $U$, where the force is defined to be the negative gradient of the interaction potential (i.e., $F = -\nabla U$ ). Classical molecular dynamics does not explicitly consider the electronic structure, instead capturing interactions between particles (i.e., $U$) via a set of analytical (or numerical) functions. Force fieldsThe interaction potentials between particles in the system are referred to as a "force field." The force field is typically considered to have two main contributions, non-bonded and bonded:$U_{total} = U_{non-bonded} + U_{bonded}$ For atomistically detailed systems (i.e., where we consider all atoms in the system explicitely) non-bonded interactions consist of a term to describe the van der Waals interactions and a term to descibe the electrostatic (or charge) interactions: $U_{non-bonded} = U_{van der Waals} + U_{charge}$Bonded interactions describe any topological constraints meant to enforce a given structure in a molecule. These included bonds between a pair of connected atoms, angles between three connected atoms, and dihedrals (or torsional) terms between 4 atoms (note dihedrals can take two forms, "proper" and "improper" as shown in the figure below):$U_{bonded} = U_{bond} + U_{angle} + U_{dihedral}$![Bonds, angles, dihedrals, and impropers, from http://www.mbnexplorer.com/users_guide_2.0/users_guide994x.png](img/bondangledihedral.png) Example: the Optimized Potentials for Liquid Simulations (OPLS) forcefield:In general, classical force fields tend to have similar functional forms (although, the subtle differences between force fields is quite important!). 
As an example, consider the functional form of the commonly used OPLS force field:$U_{van der waals} = 4\epsilon\left[ \left(\frac{\sigma}{r}\right)^{12}- \left(\frac{\sigma}{r}\right)^6 \right]$$U_{charge} = \frac{q_iq_j e^2}{ r}$$U_{bond} = K_r (r-r_0)^2$$U_{angle} = k_\theta (\theta-\theta_0)^2$$U_{dihedral} = \frac {V_1} {2} \left [ 1 + \cos (\phi-\phi_0) \right ] + \frac {V_2} {2} \left [ 1 - \cos (2\phi-\phi_0) \right ] + \frac {V_3} {2} \left [ 1 + \cos (3\phi-\phi_0) \right ] + \frac {V_4} {2} \left [ 1 - \cos (4\phi-\phi_0) \right ] $ Here, $r$ represents the separation between two atoms, $\theta$ the angle between 3 atoms, $\phi$ is between 4 atoms, $q_i$ is the charge on atom "i" and $q_j$ the charge on atom "j". The other variables (e.g., $\epsilon$, $\sigma$, $r_0$, etc.) are parameters used adjust the force field to match a given potential energy landscape. Timescales of interactionsGiven that an MD simulation numerically integrates the equations of motion through time, the simulation timestep is limited by the highest frequency vibration in the system. Recall in the example of the falling ball, if the timestep were too large we would miss the important features of the trajectory; the same general concept applies to MD simulations. A general rule of thumb is that timestep should be ~1/10 the highest frequency. Since C-H bond stretching tends to be the fastest mode ($10^{-14}$ s), a 1 fs timestep is often the limit for atomistic systems. Several methods have been developed to allow larger timesteps. For example, the SHAKE and RATTLE algorithms can be used to effectively keep bond distances fixed and eliminate this high frequency mode, typically allowing for a 2 fs timestep. The rRESPA multi-timescale integrator can also be used, where different timesteps are used for the different interactions (e.g., a smaller timestep for bonded interactions computed in an inner loop, larger timesteps for slower modes in an outer loop). The Lennard-Jones (LJ) PotentialLet us focus on the $U_{van der waals}$ term in the OPLS forcefield, modeled using the 12-6 Lennard-Jones potential. There are many ways the non-bonded van der Waals term can be expressed, although the 12-6 Lennard-Jones potential is the one of the most commonly used function forms. $U(r) = 4\epsilon\left[ \left(\frac{\sigma}{r}\right)^{12}- \left(\frac{\sigma}{r}\right)^6 \right]$In the LJ equation there are two adjustable fitting parameters, $\epsilon$ which dictates the interaction strength, and $\sigma$ which dictates the size of the particle. Note, $r$ is the distance between two particles.>- The short-range $1/r^{12}$ repulsion is used to model the overlap of electron clouds of the atoms. >- The longer-range $1/r^{6}$ attraction is used to model dispersion interactions.>- The minimum of the potential occurs at $r = 2^{(1/6)}\sigma$As previously noted, the force is the negative gradient of the potential. For the LJ equation, this becomes:$F(r) = \frac{48\epsilon}{\sigma}\left[ \left(\frac{\sigma}{r}\right)^{13}- \frac{1}{2}\left(\frac{\sigma}{r}\right)^7 \right]$The LJ potential and force is plotted in the script below. Exercise:> Modify the ```epsilon``` and ```sigma``` variables in the script to see the effect on the potential and force. 
###Code %matplotlib inline import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('default') epsilon = 1.0 sigma = 1.0 r_min = 0.9 r_max = 3.0 steps = 1000 r_step = (r_max-r_min)/steps U = np.zeros(steps) F = np.zeros(steps) r = np.zeros(steps) for i in range (0, steps): r[i] = r_min + r_step*i U[i] = 4*epsilon*((sigma/r[i])**12 - (sigma/r[i])**6) F[i] = 48*(epsilon/sigma)*((sigma/r[i])**13 - 0.5*(sigma/r[i])**7) plt.plot(r, U, c='blue', label='U(r)') plt.plot(r, F, c='red', label='F(r)') plt.ylabel('U(r) or F(r)') plt.xlabel('r') plt.ylim(-3, 4) plt.legend(loc='upper right') plt.show() ###Output _____no_output_____ ###Markdown Alternative expressionsFor the CHARMM forcefield, the LJ potential is often expressed as: $U(r) = \epsilon\left[ \left(\frac{r_m}{r}\right)^{12}- 2\left(\frac{r_m}{r}\right)^6 \right]$where $r_m = 2^{(1/6)}\sigma$. Substituting $r_m$ into the equation yields the same functional form as above.The LJ potential is also sometimes written in the "AB" form:$U(r) = \frac{A}{r^{12}} -\frac{B}{r^{6}}$When $A=4\epsilon\sigma^{12}$ and $B= 4\epsilon\sigma^{6}$, we recover the standard 12-6 form. A more general "2n" form is also common, that resembles the CHARMM functional form, but allows for different exponents; when n=6, we recover the 12-6 form. $U(r) = \epsilon\left[ \left(\frac{r_m}{r}\right)^{2n}- 2\left(\frac{r_m}{r}\right)^n \right]$Similarly, the Mie potential is another generalized form that reduces to the LJ potential when n = 12 and m = 6:$U(r) = \left( \frac{n}{n-m}\right)\left(\frac{n}{m}\right)^{m/(n-m)}\epsilon\left[ \left(\frac{\sigma}{r}\right)^{n}- \left(\frac{\sigma}{r}\right)^m \right]$ Other expressions of the van der Waals interactionWhile the $1/r^{12}$ term has been commonly used to model the repulsion of atoms, and is computationally convenient as it is a multiple of $1/r^{6}$, the Buckingham potential (often call the "exponential-6" or "exp-6" potential), is considered more accurate:$U(r) = \gamma\left[ e^{-r/r_0}- \left(\frac{r_0}{r}\right)^6 \right]$The COMPASS force field uses the 6/9 class2 LJ forcefield. This puts the minimum of the potential at $\sigma$ rather than $2^{(1/6)}\sigma$:$U(r) = \epsilon\left[ 2\left(\frac{\sigma}{r}\right)^{9}- 3\left(\frac{\sigma}{r}\right)^6 \right]$ Exercise:> As a simple exercise, add to the LJ plotting code in order to plot: > - the Mie potential and see the effect of the exponents on the shape> - the 6/9 class2 LJ potential. Simulation Conditions Periodic Boundary ConditionsSince, in most cases, MD simulations can only model a very small subset of a real physical system (on the order of nanometers), periodic boundary conditions are typically employed. As a particle "leaves" a simulation box, it re-enters on the opposite side, resulting in an infinite, yet periodic system. This approach avoids artifacts associated with hard boundaries, although may itself introduce artifacts associated with periodicity if the system is too small.Explicit self-interactions are not allowed; that is, you would not calculate the potential between a given particle and its periodic image. The image below on the left presents a cartoon representation of a system with its periodic images. A simple animation of periodic boundary conditions can be found at the follow [youtube link](https://www.youtube.com/watch?v=5qdNafdyaG0). Minimum Image ConventionMD simulations typically employ what is known as the "minimum image convention." 
> * For example, consider two particles (A and B) in a cubic simulation box of edge length 10. Particle A is located at [1,0,0] and Particle B is located at [8,0,0]. The distance between the particles within the simulation box is therefore 7. However, the distance between Particle A and the periodic image of Particle B is actually 3. Thus, with the minimum image convenction we would consider the distance between these two particles to be 3. > * In more algorithmic terms, if the distance between two particles is greater than half the box length, the length of the box is subtracted from the distance between the particles. A cartoon representation of the minimum image convention is shown below, right. In this example, the dark green particle interacts with the periodic image of the light green particle, rather than the light green particle within the same cell.![2d cartoon of periodic images](img/pbc2.png) Exercise> Let us modify the falling ball example such that we now just have a particle moving at a constant velocity (no gravitational acceleration) and change the code such that it implements periodic boundary conditions with walls at 10 and -10 in the height. * As another simple exercise, considering adding a small random perturbation (on the order of +/- 0.01) to the velocity at each timestep to see the effect. Don't forget to set the seed! * https://docs.python.org/3.6/library/random.html * Note that the floating point random number generate in Python returns values from 0.0 to 1.0. To generate between -1.0 and 1.0 we need to make a very simple modification ```(2.0*random()-1.0)```. ###Code %matplotlib inline import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('default') import random as rand g = 0 #gravitation constant, m/s^2 dt = 0.001 #timestep, s timesteps = 100000 #total number of timesteps to consider velocity_i = 1.0 #initial velocity, m/s height_i = 0.0 #initial height, m #boundaries of the system height_min = -10 height_max = 10 height_length= height_max - height_min velocity_array = np.zeros(timesteps) height_array = np.zeros(timesteps) time_array = np.zeros(timesteps) height_array[0] = height_i velocity_array[0] = velocity_i current_height = height_i for i in range (0, timesteps-1): time_array[i+1] = time_array[i] + dt velocity_array[i+1] = velocity_array[i] + (g*dt) height_array[i+1] = height_array[i]+ 0.5*(velocity_array[i]+velocity_array[i+1])*dt #apply periodic boundary conditions to the system height_array[i+1] = ax = plt.subplot(2,1,1) ax.plot(time_array, height_array, c='blue') plt.ylabel('height (m)') ax = plt.subplot(2,1,2) ax.plot(time_array, velocity_array, c='red') plt.ylabel('velocity (m/s)') plt.xlabel('time (s)') plt.show() ###Output _____no_output_____ ###Markdown Introduction to Molecular DynamicsIn classical molecular dynamics (MD), a simulation engine numerically integrates Newton's equations of motion to move particles through time. The general process is similar to the previous simulation example of the ball falling from rest. Since acceleration is the change in velocity over time, and velocity is the change in position over time, we can therefore evolve the position of particles in the system. However, instead of a constant acceleration due to gravity, acceleration results from the interaction between particles in the system following the relationship $F = ma$, where $F$ is the force, $m$ is the mass and $a$ is acceleration. 
Newton's equations represent a set of 'N' second order differential equations (where 'N' corresponds to the number of particles). There are many ways in which to perform the numerical integration, some better than others in terms of stability and precision. Velocity Verlet algorithmThis method is a commonly used, robust scheme, e.g., used in packages such [LAMMPS](http://lammps.sandia.gov). The Velocity Verlet algorithm updates the position at time $t+\delta t$ based on the velocity ($\vec{v_i}$) and force at the current time $t$. $\vec{r_i}(t+\delta t) = \vec{r_i}(t) + \vec{v_i}(t)\delta t + \frac{1}{2m_i}\vec{F_i}(t)\delta t^2$ The velocity is updated using the following equation:$\vec{v_i}(t+\delta t) = \vec{v_i}(t) + \frac{1}{2m_i}\left[\vec{F_i}(t)+\vec{F_i}(t+\delta t)\right]\delta t$ For more detail information see [David Kofke's lecture slides on the Velocity Verlet algorithm (and other integration methods).](https://www.eng.buffalo.edu/~kofke/ce530/Lectures/Lecture11/sld041.htm) ExerciseModify the ball falling code (included below) to use the Velocity Verlet algorithm to update positions. Here we will assume that the force acting on the ball follows $F=ma$ where we can assume the ball is of mass 'm' (note mass will cancel out in the equation, thus we do not actually need to define it). Also note, since this ball experiences a constant acceleration, the force is constant throughout time. * Use the ```height_array_vv``` to store the position for the Velocity Verlet algorithm and the ```velocity_array_vv``` to store the velocity for the Velocity Verlet algorithm. You can leave the other code in place for comparison.* If you have done this correctly, the output plots on the right hand column (for Velocity Verlet) will be identical to the left hand column (using the kinematic equations). 
###Code %matplotlib inline import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('default') g = -9.8 #gravitation constant, m/s^2 dt = 0.0001 #timestep, s timesteps = 22000 #total number of timesteps to consider velocity_i = 5.0 #initial velocity, m/s height_i = 10.0 #initial height, m velocity_array = np.zeros(timesteps) height_array = np.zeros(timesteps) #arrays for the velocity verlet algorithm height_array_vv = np.zeros(timesteps) velocity_array_vv = np.zeros(timesteps) time_array = np.zeros(timesteps) height_array[0] = height_i height_array_vv[0] = height_i velocity_array[0] = velocity_i velocity_array_vv[0] = velocity_i current_height = height_i for i in range (0, timesteps-1): time_array[i+1] = time_array[i] + dt velocity_array[i+1] = velocity_array[i] + (g*dt) height_array[i+1] = height_array[i]+ 0.5*(velocity_array[i]+velocity_array[i+1])*dt #add the velocity verlet code here height_array_vv[i+1] = velocity_array_vv[i+1] = #if we have reached the ground, zero out the position and velocity if height_array[i+1] <= 0: velocity_array[i+1] = 0 height_array[i+1] = 0 #velocity verlet arrays height_array_vv[i+1] = 0 velocity_array_vv[i+1] = 0 ax = plt.subplot(2,2,1) ax.plot(time_array, height_array, c='blue') plt.ylabel('height (m)') ax = plt.subplot(2,2,2) ax.plot(time_array, height_array_vv, c='green') ax = plt.subplot(2,2,3) ax.plot(time_array, velocity_array, c='blue') plt.ylabel('velocity (m/s)') plt.xlabel('time (s)') ax = plt.subplot(2,2,4) ax.plot(time_array, velocity_array_vv, c='green') plt.xlabel('time (s)') plt.show() ###Output _____no_output_____ ###Markdown Interaction potentialAs noted above, we can numerically integrate the motion of particles in a system if we know the forces that act upon them. Typically, we think about the forces acting upon particles through consideration of the interaction potential, $U$, where the force is defined to be the negative gradient of the interaction potential (i.e., $F = -\nabla U$ ). Classical molecular dynamics does not explicitly consider the electronic structure, instead capturing interactions between particles (i.e., $U$) via a set of analytical (or numerical) functions. Force fieldsThe interaction potentials between particles in the system are referred to as a "force field." The force field is typically considered to have two main contributions, non-bonded and bonded:$U_{total} = U_{non-bonded} + U_{bonded}$ For atomistically detailed systems (i.e., where we consider all atoms in the system explicitely) non-bonded interactions consist of a term to describe the van der Waals interactions and a term to descibe the electrostatic (or charge) interactions: $U_{non-bonded} = U_{van der Waals} + U_{charge}$Bonded interactions describe any topological constraints meant to enforce a given structure in a molecule. These included bonds between a pair of connected atoms, angles between three connected atoms, and dihedrals (or torsional) terms between 4 atoms (note dihedrals can take two forms, "proper" and "improper" as shown in the figure below):$U_{bonded} = U_{bond} + U_{angle} + U_{dihedral}$![Bonds, angles, dihedrals, and impropers, from http://www.mbnexplorer.com/users_guide_2.0/users_guide994x.png](img/bondangledihedral.png) Example: the Optimized Potentials for Liquid Simulations (OPLS) forcefield:In general, classical force fields tend to have similar functional forms (although, the subtle differences between force fields is quite important!). 
As an example, consider the functional form of the commonly used OPLS force field:$U_{van der waals} = 4\epsilon\left[ \left(\frac{\sigma}{r}\right)^{12}- \left(\frac{\sigma}{r}\right)^6 \right]$$U_{charge} = \frac{q_iq_j e^2}{ r}$$U_{bond} = K_r (r-r_0)^2$$U_{angle} = k_\theta (\theta-\theta_0)^2$$U_{dihedral} = \frac {V_1} {2} \left [ 1 + \cos (\phi-\phi_0) \right ] + \frac {V_2} {2} \left [ 1 - \cos 2(\phi-\phi_0) \right ] + \frac {V_3} {2} \left [ 1 + \cos 3(\phi-\phi_0) \right ] + \frac {V_4} {2} \left [ 1 - \cos 4(\phi-\phi_0) \right ] $ Here, $r$ represents the separation between two atoms, $\theta$ the angle between 3 atoms, $\phi$ is between 4 atoms, $q_i$ is the charge on atom "i" and $q_j$ the charge on atom "j". The other variables (e.g., $\epsilon$, $\sigma$, $r_0$, etc.) are parameters used adjust the force field to match a given potential energy landscape. Timescales of interactionsGiven that an MD simulation numerically integrates the equations of motion through time, the simulation timestep is limited by the highest frequency vibration in the system. Recall in the example of the falling ball, if the timestep were too large we would miss the important features of the trajectory; the same general concept applies to MD simulations. A general rule of thumb is that timestep should be ~1/10 the highest frequency. Since C-H bond stretching tends to be the fastest mode ($10^{-14}$ s), a 1 fs timestep is often the limit for atomistic systems. Several methods have been developed to allow larger timesteps. For example, the SHAKE and RATTLE algorithms can be used to effectively keep bond distances fixed and eliminate this high frequency mode, typically allowing for a 2 fs timestep. The rRESPA multi-timescale integrator can also be used, where different timesteps are used for the different interactions (e.g., a smaller timestep for bonded interactions computed in an inner loop, larger timesteps for slower modes in an outer loop). The Lennard-Jones (LJ) PotentialLet us focus on the $U_{van der waals}$ term in the OPLS forcefield, modeled using the 12-6 Lennard-Jones potential. There are many ways the non-bonded van der Waals term can be expressed, although the 12-6 Lennard-Jones potential is the one of the most commonly used function forms. $U(r) = 4\epsilon\left[ \left(\frac{\sigma}{r}\right)^{12}- \left(\frac{\sigma}{r}\right)^6 \right]$In the LJ equation there are two adjustable fitting parameters, $\epsilon$ which dictates the interaction strength, and $\sigma$ which dictates the size of the particle. Note, $r$ is the distance between two particles.>- The short-range $1/r^{12}$ repulsion is used to model the overlap of electron clouds of the atoms. >- The longer-range $1/r^{6}$ attraction is used to model dispersion interactions.>- The minimum of the potential occurs at $r = 2^{(1/6)}\sigma$As previously noted, the force is the negative gradient of the potential. For the LJ equation, this becomes:$F(r) = \frac{48\epsilon}{\sigma}\left[ \left(\frac{\sigma}{r}\right)^{13}- \frac{1}{2}\left(\frac{\sigma}{r}\right)^7 \right]$The LJ potential and force is plotted in the script below. Exercise:> Modify the ```epsilon``` and ```sigma``` variables in the script to see the effect on the potential and force. 
###Code %matplotlib inline import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('default') epsilon = 1.0 sigma = 1.0 r_min = 0.9 r_max = 3.0 steps = 1000 r_step = (r_max-r_min)/steps U = np.zeros(steps) F = np.zeros(steps) r = np.zeros(steps) for i in range (0, steps): r[i] = r_min + r_step*i U[i] = 4*epsilon*((sigma/r[i])**12 - (sigma/r[i])**6) F[i] = 48*(epsilon/sigma)*((sigma/r[i])**13 - 0.5*(sigma/r[i])**7) plt.plot(r, U, c='blue', label='U(r)') plt.plot(r, F, c='red', label='F(r)') plt.ylabel('U(r) or F(r)') plt.xlabel('r') plt.ylim(-3, 4) plt.legend(loc='upper right') plt.show() ###Output _____no_output_____ ###Markdown Alternative expressionsFor the CHARMM forcefield, the LJ potential is often expressed as: $U(r) = \epsilon\left[ \left(\frac{r_m}{r}\right)^{12}- 2\left(\frac{r_m}{r}\right)^6 \right]$where $r_m = 2^{(1/6)}\sigma$. Substituting $r_m$ into the equation yields the same functional form as above.The LJ potential is also sometimes written in the "AB" form:$U(r) = \frac{A}{r^{12}} -\frac{B}{r^{6}}$When $A=4\epsilon\sigma^{12}$ and $B= 4\epsilon\sigma^{6}$, we recover the standard 12-6 form. A more general "2n" form is also common, that resembles the CHARMM functional form, but allows for different exponents; when n=6, we recover the 12-6 form. $U(r) = \epsilon\left[ \left(\frac{r_m}{r}\right)^{2n}- 2\left(\frac{r_m}{r}\right)^n \right]$Similarly, the Mie potential is another generalized form that reduces to the LJ potential when n = 12 and m = 6:$U(r) = \left( \frac{n}{n-m}\right)\left(\frac{n}{m}\right)^{m/(n-m)}\epsilon\left[ \left(\frac{\sigma}{r}\right)^{n}- \left(\frac{\sigma}{r}\right)^m \right]$ Other expressions of the van der Waals interactionWhile the $1/r^{12}$ term has been commonly used to model the repulsion of atoms, and is computationally convenient as it is a multiple of $1/r^{6}$, the Buckingham potential (often call the "exponential-6" or "exp-6" potential), is considered more accurate:$U(r) = \gamma\left[ e^{-r/r_0}- \left(\frac{r_0}{r}\right)^6 \right]$The COMPASS force field uses the 6/9 class2 LJ forcefield. This puts the minimum of the potential at $\sigma$ rather than $2^{(1/6)}\sigma$:$U(r) = \epsilon\left[ 2\left(\frac{\sigma}{r}\right)^{9}- 3\left(\frac{\sigma}{r}\right)^6 \right]$ Exercise:> As a simple exercise, add to the LJ plotting code in order to plot: > - the Mie potential and see the effect of the exponents on the shape> - the 6/9 class2 LJ potential. Simulation Conditions Periodic Boundary ConditionsSince, in most cases, MD simulations can only model a very small subset of a real physical system (on the order of nanometers), periodic boundary conditions are typically employed. As a particle "leaves" a simulation box, it re-enters on the opposite side, resulting in an infinite, yet periodic system. This approach avoids artifacts associated with hard boundaries, although may itself introduce artifacts associated with periodicity if the system is too small.Explicit self-interactions are not allowed; that is, you would not calculate the potential between a given particle and its periodic image. The image below on the left presents a cartoon representation of a system with its periodic images. A simple animation of periodic boundary conditions can be found at the follow [youtube link](https://www.youtube.com/watch?v=5qdNafdyaG0). Minimum Image ConventionMD simulations typically employ what is known as the "minimum image convention." 
> * For example, consider two particles (A and B) in a cubic simulation box of edge length 10. Particle A is located at [1,0,0] and Particle B is located at [8,0,0]. The distance between the particles within the simulation box is therefore 7. However, the distance between Particle A and the periodic image of Particle B is actually 3. Thus, with the minimum image convenction we would consider the distance between these two particles to be 3. > * In more algorithmic terms, if the distance between two particles is greater than half the box length, the length of the box is subtracted from the distance between the particles. A cartoon representation of the minimum image convention is shown below, right. In this example, the dark green particle interacts with the periodic image of the light green particle, rather than the light green particle within the same cell.![2d cartoon of periodic images](img/pbc2.png) Exercise> Let us modify the falling ball example such that we now just have a particle moving at a constant velocity (no gravitational acceleration) and change the code such that it implements periodic boundary conditions with walls at 10 and -10 in the height. * As another simple exercise, considering adding a small random perturbation (on the order of +/- 0.01) to the velocity at each timestep to see the effect. Don't forget to set the seed! * https://docs.python.org/3.6/library/random.html * Note that the floating point random number generate in Python returns values from 0.0 to 1.0. To generate between -1.0 and 1.0 we need to make a very simple modification ```(2.0*random()-1.0)```. ###Code %matplotlib inline import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('default') import random as rand g = 0 #gravitation constant, m/s^2 dt = 0.001 #timestep, s timesteps = 100000 #total number of timesteps to consider velocity_i = 1.0 #initial velocity, m/s height_i = 0.0 #initial height, m #boundaries of the system height_min = -10 height_max = 10 height_length= height_max - height_min velocity_array = np.zeros(timesteps) height_array = np.zeros(timesteps) time_array = np.zeros(timesteps) height_array[0] = height_i velocity_array[0] = velocity_i current_height = height_i for i in range (0, timesteps-1): time_array[i+1] = time_array[i] + dt velocity_array[i+1] = velocity_array[i] + (g*dt) height_array[i+1] = height_array[i]+ 0.5*(velocity_array[i]+velocity_array[i+1])*dt #apply periodic boundary conditions to the system height_array[i+1] = ax = plt.subplot(2,1,1) ax.plot(time_array, height_array, c='blue') plt.ylabel('height (m)') ax = plt.subplot(2,1,2) ax.plot(time_array, velocity_array, c='red') plt.ylabel('velocity (m/s)') plt.xlabel('time (s)') plt.show() ###Output _____no_output_____
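###Markdown As a closing illustration (not part of the original notebook), the minimum image convention described above can be written as a short helper function; the function name and the use of `numpy.round` for the wrap are implementation choices, not something prescribed by the text. ###Code
import numpy as np

def minimum_image_distance(r_a, r_b, box_length):
    """Distance between two particles under the minimum image convention.
    Each component of the separation is wrapped into [-L/2, L/2]."""
    delta = np.asarray(r_a, dtype=float) - np.asarray(r_b, dtype=float)
    delta -= box_length * np.round(delta / box_length)
    return np.linalg.norm(delta)

# The worked example from the text: a cubic box of edge length 10 with
# particle A at [1, 0, 0] and particle B at [8, 0, 0]; the result should be 3.
print(minimum_image_distance([1, 0, 0], [8, 0, 0], box_length=10.0))
###Output _____no_output_____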
Melbourne House Price Prediction.ipynb
###Markdown Reading the Input File ###Code import pandas as pd data=pd.read_csv('melb_data.csv') data.dropna(axis=0) # note: result not assigned back, so data still keeps rows with missing values data.head() data.describe() data.columns feature_names=['Landsize','Rooms','Bedroom2','Bathroom','Lattitude','Longtitude'] X=data[feature_names] X.head() y=data.Price ###Output _____no_output_____ ###Markdown Decision Tree Regressor ###Code from sklearn.tree import DecisionTreeRegressor from sklearn.metrics import mean_absolute_error melbourne_model=DecisionTreeRegressor(max_leaf_nodes=100,random_state=1) melbourne_model.fit(X,y) melb_prediction1=melbourne_model.predict(X) val_maedtr=mean_absolute_error(melb_prediction1,y) print('Validation MAE for Decision Tree Regressor is :',val_maedtr) print('Prediction prices for the first five houses are :') print(X.head()) melbourne_model.predict(X.head()) ###Output Prediction prices for the first five houses are : Landsize Rooms Bedroom2 Bathroom Lattitude Longtitude 0 202.0 2 2.0 1.0 -37.7996 144.9984 1 156.0 2 2.0 1.0 -37.8079 144.9934 2 134.0 3 3.0 2.0 -37.8093 144.9944 3 94.0 3 3.0 2.0 -37.7969 144.9969 4 120.0 4 3.0 1.0 -37.8072 144.9941 ###Markdown Random Forest Regressor ###Code from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split train_X,val_X,train_y,val_y=train_test_split(X,y,random_state=1) forest_model=RandomForestRegressor(random_state=1) forest_model.fit(train_X,train_y) melb_predictions=forest_model.predict(val_X) val_maerar=mean_absolute_error(val_y,melb_predictions) print("Validation MAE for RFR is ",val_maerar) print('Prediction prices for the first five houses are :') print(X.head()) melb_predictions ###Output Prediction prices for the first five houses are : Landsize Rooms Bedroom2 Bathroom Lattitude Longtitude 0 202.0 2 2.0 1.0 -37.7996 144.9984 1 156.0 2 2.0 1.0 -37.8079 144.9934 2 134.0 3 3.0 2.0 -37.8093 144.9944 3 94.0 3 3.0 2.0 -37.7969 144.9969 4 120.0 4 3.0 1.0 -37.8072 144.9941 ###Markdown Difference between the two approaches As we can see, the mean absolute error for the RandomForestRegressor is much lower than that of the DecisionTreeRegressor. Therefore, we will use the RandomForestRegressor approach in the final prediction. ###Code print("Mean absolute error - Decision Tree Regressor Approach : ",val_maedtr) print("Mean absolute error - Random Forest Regressor Approach : ",val_maerar) ###Output Mean absolute error - Decision Tree Regressor Approach : 219152.74144897278 Mean absolute error - Random Forest Regressor Approach : 180515.9688541038 ###Markdown Output File ###Code rf_model_on_full_data = RandomForestRegressor(random_state=1) rf_model_on_full_data.fit(train_X,train_y) test_data=pd.read_csv('melb_data.csv') test_X=data[feature_names] test_preds =rf_model_on_full_data.predict(test_X) output = pd.DataFrame({ 'SalePrice': test_preds}) output.to_csv('Predictions.csv', index=False) ###Output _____no_output_____
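###Markdown Note that the decision tree error above was measured on the same data the tree was trained on, while the random forest was scored on a held-out split. A like-for-like check (not part of the original notebook) is sketched below; it reuses `train_X`, `val_X`, `train_y`, `val_y` and the estimators imported in the cells above. ###Code
# Train both models on the same training split and score both on the same
# validation split, so the two MAE numbers are directly comparable.
dtr_val = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)
dtr_val.fit(train_X, train_y)
dtr_val_mae = mean_absolute_error(val_y, dtr_val.predict(val_X))

rfr_val = RandomForestRegressor(random_state=1)
rfr_val.fit(train_X, train_y)
rfr_val_mae = mean_absolute_error(val_y, rfr_val.predict(val_X))

print("Validation MAE - Decision Tree :", dtr_val_mae)
print("Validation MAE - Random Forest :", rfr_val_mae)
###Output _____no_output_____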
Assignments/Module-5-Data-Manipulation-Using-Pandas/Pandas Assignment 3.ipynb
###Markdown Module – Data Manipulation Using Pandas Assignment Pandas Assignment 3 Problem Statement:You work in XYZ Corporation as a Data Analyst. Your corporation has told you to analyze the customer_churn dataset with various functions.1. Display the top 100 records from the original data frame.2. Display the last 10 records from the data frame.3. Display the last record from the data frame. ###Code import pandas as pd cus_ch = pd.read_csv (r'P:\DA DS AI\Data Science\Assignments\Module-5-Data-Manipulation-Using-Pandas\customer_churn.csv') cus_ch #Display the top 100 rows cus_ch.head(100) #Display the last 10 rows cus_ch.tail(10) #Display the last row cus_ch.tail(1) ###Output _____no_output_____
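###Markdown As an optional aside (not part of the assignment), the same three selections can be made with positional indexing; this is only an alternative view of `head()` and `tail()`. ###Code
# Equivalent selections using iloc-based positional indexing.
top_100 = cus_ch.iloc[:100]   # same rows as cus_ch.head(100)
last_10 = cus_ch.iloc[-10:]   # same rows as cus_ch.tail(10)
last_1 = cus_ch.iloc[[-1]]    # same row as cus_ch.tail(1), kept as a DataFrame
print(top_100.shape, last_10.shape, last_1.shape)
###Output _____no_output_____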
t81_558_class_02_3_pandas_grouping.ipynb
###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Shuffling a DatasetThe following code is used to shuffle and reindex a data set. A random seed can be used to produce a consistent shuffling of the data set. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) df.reset_index(inplace=True, drop=True) display(df[0:10]) ###Output _____no_output_____ ###Markdown Sorting a Data SetData sets can also be sorted. This code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") display(df[0:5]) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a common operation on data sets. In SQL, this operation is referred to as "GROUP BY". Grouping is used to summarize data. Because of this summarization the row count will either stay the same or more likely shrink after a grouping is applied.The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) display(df[0:5]) ###Output _____no_output_____ ###Markdown The above data set can be used with group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to mean, other aggregating functions, such as **sum** or **count** can be used. 
###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown This allows you to quickly access an individual element, such as to lookup the mean for 6 cylinders. This is used in target encoding, which is presented in this module. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Now we will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. Perhaps this dataset is ordered by the number of years that the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. 
You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the first column, the row indexes, has not been reset. Generally, this will not cause any issues and allows trace back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df.reset_index(inplace=True, drop=True) display(df) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always a good idea to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The above data set can be used with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. 
For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, makes use of this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Now we will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. Perhaps this dataset is ordered by the number of years that the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the first column, the row indexes, has not been reset. 
Generally, this will not cause any issues and allows trace back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df.reset_index(inplace=True, drop=True) display(df) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always a good idea to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The above data set can be used with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, makes use of this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. 
###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling We will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. Perhaps this dataset is ordered by the number of years the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the program has not reset the row indexes' first column. 
Generally, this will not cause any issues and allows tracing back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df.reset_index(inplace=True, drop=True) display(df) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always good to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. We use the Auto MPG dataset to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown You can use the above data set with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, uses this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. 
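As a related sketch (added here, not a lecture cell), `size` counts every row in each group, while `count` only tallies non-null entries of the selected column; the two can differ for a column with missing values, such as **horsepower** in this file. ###Code
# Sketch (illustrative): size() vs. count() per cylinder group.
import pandas as pd

df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=['NA', '?'])

rows_per_group = df.groupby('cylinders').size()                # every row
non_null_hp = df.groupby('cylinders')['horsepower'].count()    # non-null only

display(rows_per_group.to_dict())
display(non_null_hp.to_dict())
###Output _____no_output_____ ###Markdown The count on **mpg** from the lecture follows in the next cell. 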
###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Now we will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. Perhaps this dataset is ordered by the number of years that the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the first column, the row indexes, has not been reset. Generally, this will not cause any issues and allows trace back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. 
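A common one-line equivalent (sketched here as an addition, not the course's own cell) is `sample` with `frac=1`, which shuffles the rows and can be made repeatable with `random_state`, followed by the same index reset. ###Code
# Sketch (illustrative): shuffle with sample() and reset the index.
import pandas as pd

df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=['NA', '?'])

# frac=1 returns every row in random order; random_state fixes the shuffle.
df_shuffled = df.sample(frac=1, random_state=42).reset_index(drop=True)
display(df_shuffled[0:10])
###Output _____no_output_____ ###Markdown The reindex-based shuffle used in the lecture is shown next. 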
###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 0) df.reset_index(inplace=True, drop=True) display(df[0:10]) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always a good idea to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) idx = 0 for name in df['name']: print(f"The number {idx + 1} car is: {df['name'].iloc[idx]}") idx += 1 # print(f"The first car is: {df['name'].iloc[0]}") pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 0) display(df) ###Output The number 1 car is: amc ambassador brougham The number 2 car is: amc ambassador dpl The number 3 car is: amc ambassador sst The number 4 car is: amc concord The number 5 car is: amc concord The number 6 car is: amc concord d/l The number 7 car is: amc concord dl The number 8 car is: amc concord dl 6 The number 9 car is: amc gremlin The number 10 car is: amc gremlin The number 11 car is: amc gremlin The number 12 car is: amc gremlin The number 13 car is: amc hornet The number 14 car is: amc hornet The number 15 car is: amc hornet The number 16 car is: amc hornet The number 17 car is: amc hornet sportabout (sw) The number 18 car is: amc matador The number 19 car is: amc matador The number 20 car is: amc matador The number 21 car is: amc matador The number 22 car is: amc matador The number 23 car is: amc matador (sw) The number 24 car is: amc matador (sw) The number 25 car is: amc pacer The number 26 car is: amc pacer d/l The number 27 car is: amc rebel sst The number 28 car is: amc spirit dl The number 29 car is: audi 100 ls The number 30 car is: audi 100ls The number 31 car is: audi 100ls The number 32 car is: audi 4000 The number 33 car is: audi 5000 The number 34 car is: audi 5000s (diesel) The number 35 car is: audi fox The number 36 car is: bmw 2002 The number 37 car is: bmw 320i The number 38 car is: buick century The number 39 car is: buick century The number 40 car is: buick century 350 The number 41 car is: buick century limited The number 42 car is: buick century luxus (sw) The number 43 car is: buick century special The number 44 car is: buick electra 225 custom The number 45 car is: buick estate wagon (sw) The number 46 car is: buick estate wagon (sw) The number 47 car is: buick lesabre custom The number 48 car is: buick opel isuzu deluxe The number 49 car is: buick regal sport coupe (turbo) The number 50 car is: buick skyhawk The number 51 car is: buick skylark The number 52 car is: buick skylark The number 53 car is: buick skylark 320 The number 54 car is: buick skylark limited The number 55 car is: cadillac eldorado The 
number 56 car is: cadillac seville The number 57 car is: capri ii The number 58 car is: chevroelt chevelle malibu The number 59 car is: chevrolet bel air The number 60 car is: chevrolet camaro The number 61 car is: chevrolet caprice classic The number 62 car is: chevrolet caprice classic The number 63 car is: chevrolet caprice classic The number 64 car is: chevrolet cavalier The number 65 car is: chevrolet cavalier 2-door The number 66 car is: chevrolet cavalier wagon The number 67 car is: chevrolet chevelle concours (sw) The number 68 car is: chevrolet chevelle malibu The number 69 car is: chevrolet chevelle malibu The number 70 car is: chevrolet chevelle malibu classic The number 71 car is: chevrolet chevelle malibu classic The number 72 car is: chevrolet chevette The number 73 car is: chevrolet chevette The number 74 car is: chevrolet chevette The number 75 car is: chevrolet chevette The number 76 car is: chevrolet citation The number 77 car is: chevrolet citation The number 78 car is: chevrolet citation The number 79 car is: chevrolet concours The number 80 car is: chevrolet impala The number 81 car is: chevrolet impala The number 82 car is: chevrolet impala The number 83 car is: chevrolet impala The number 84 car is: chevrolet malibu The number 85 car is: chevrolet malibu The number 86 car is: chevrolet malibu classic (sw) The number 87 car is: chevrolet monte carlo The number 88 car is: chevrolet monte carlo landau The number 89 car is: chevrolet monte carlo landau The number 90 car is: chevrolet monte carlo s The number 91 car is: chevrolet monza 2+2 The number 92 car is: chevrolet nova The number 93 car is: chevrolet nova The number 94 car is: chevrolet nova The number 95 car is: chevrolet nova custom The number 96 car is: chevrolet vega The number 97 car is: chevrolet vega The number 98 car is: chevrolet vega The number 99 car is: chevrolet vega (sw) The number 100 car is: chevrolet vega 2300 The number 101 car is: chevrolet woody The number 102 car is: chevy c10 The number 103 car is: chevy c20 The number 104 car is: chevy s-10 The number 105 car is: chrysler cordoba The number 106 car is: chrysler lebaron medallion The number 107 car is: chrysler lebaron salon The number 108 car is: chrysler lebaron town @ country (sw) The number 109 car is: chrysler new yorker brougham The number 110 car is: chrysler newport royal The number 111 car is: datsun 1200 The number 112 car is: datsun 200-sx The number 113 car is: datsun 200sx The number 114 car is: datsun 210 The number 115 car is: datsun 210 The number 116 car is: datsun 210 mpg The number 117 car is: datsun 280-zx The number 118 car is: datsun 310 The number 119 car is: datsun 310 gx The number 120 car is: datsun 510 The number 121 car is: datsun 510 (sw) The number 122 car is: datsun 510 hatchback The number 123 car is: datsun 610 The number 124 car is: datsun 710 The number 125 car is: datsun 710 The number 126 car is: datsun 810 The number 127 car is: datsun 810 maxima The number 128 car is: datsun b-210 The number 129 car is: datsun b210 The number 130 car is: datsun b210 gx The number 131 car is: datsun f-10 hatchback The number 132 car is: datsun pl510 The number 133 car is: datsun pl510 The number 134 car is: dodge aries se The number 135 car is: dodge aries wagon (sw) The number 136 car is: dodge aspen The number 137 car is: dodge aspen The number 138 car is: dodge aspen 6 The number 139 car is: dodge aspen se The number 140 car is: dodge challenger se The number 141 car is: dodge charger 2.2 The number 142 car is: dodge 
colt The number 143 car is: dodge colt The number 144 car is: dodge colt The number 145 car is: dodge colt (sw) The number 146 car is: dodge colt hardtop The number 147 car is: dodge colt hatchback custom The number 148 car is: dodge colt m/m The number 149 car is: dodge coronet brougham The number 150 car is: dodge coronet custom The number 151 car is: dodge coronet custom (sw) The number 152 car is: dodge d100 The number 153 car is: dodge d200 The number 154 car is: dodge dart custom The number 155 car is: dodge diplomat The number 156 car is: dodge magnum xe The number 157 car is: dodge monaco (sw) The number 158 car is: dodge monaco brougham The number 159 car is: dodge omni The number 160 car is: dodge rampage The number 161 car is: dodge st. regis The number 162 car is: fiat 124 sport coupe The number 163 car is: fiat 124 tc The number 164 car is: fiat 124b The number 165 car is: fiat 128 The number 166 car is: fiat 128 The number 167 car is: fiat 131 The number 168 car is: fiat strada custom The number 169 car is: fiat x1.9 The number 170 car is: ford country The number 171 car is: ford country squire (sw) The number 172 car is: ford country squire (sw) The number 173 car is: ford escort 2h The number 174 car is: ford escort 4w The number 175 car is: ford f108 The number 176 car is: ford f250 The number 177 car is: ford fairmont The number 178 car is: ford fairmont (auto) The number 179 car is: ford fairmont (man) The number 180 car is: ford fairmont 4 The number 181 car is: ford fairmont futura The number 182 car is: ford fiesta The number 183 car is: ford futura The number 184 car is: ford galaxie 500 The number 185 car is: ford galaxie 500 The number 186 car is: ford galaxie 500 The number 187 car is: ford gran torino The number 188 car is: ford gran torino The number 189 car is: ford gran torino The number 190 car is: ford gran torino (sw) The number 191 car is: ford gran torino (sw) The number 192 car is: ford granada The number 193 car is: ford granada ghia The number 194 car is: ford granada gl The number 195 car is: ford granada l The number 196 car is: ford ltd The number 197 car is: ford ltd The number 198 car is: ford ltd landau The number 199 car is: ford maverick The number 200 car is: ford maverick The number 201 car is: ford maverick The number 202 car is: ford maverick The number 203 car is: ford maverick The number 204 car is: ford mustang The number 205 car is: ford mustang cobra The number 206 car is: ford mustang gl The number 207 car is: ford mustang ii The number 208 car is: ford mustang ii 2+2 The number 209 car is: ford pinto The number 210 car is: ford pinto The number 211 car is: ford pinto The number 212 car is: ford pinto The number 213 car is: ford pinto The number 214 car is: ford pinto The number 215 car is: ford pinto (sw) The number 216 car is: ford pinto runabout The number 217 car is: ford ranger The number 218 car is: ford thunderbird The number 219 car is: ford torino The number 220 car is: ford torino 500 The number 221 car is: hi 1200d The number 222 car is: honda accord The number 223 car is: honda accord The number 224 car is: honda accord cvcc The number 225 car is: honda accord lx The number 226 car is: honda civic The number 227 car is: honda civic The number 228 car is: honda civic The number 229 car is: honda civic (auto) The number 230 car is: honda civic 1300 The number 231 car is: honda civic 1500 gl The number 232 car is: honda civic cvcc The number 233 car is: honda civic cvcc The number 234 car is: honda prelude The number 235 car 
is: maxda glc deluxe The number 236 car is: maxda rx3 The number 237 car is: mazda 626 The number 238 car is: mazda 626 The number 239 car is: mazda glc The number 240 car is: mazda glc 4 The number 241 car is: mazda glc custom The number 242 car is: mazda glc custom l The number 243 car is: mazda glc deluxe The number 244 car is: mazda rx-4 The number 245 car is: mazda rx-7 gs The number 246 car is: mazda rx2 coupe The number 247 car is: mercedes benz 300d The number 248 car is: mercedes-benz 240d The number 249 car is: mercedes-benz 280s The number 250 car is: mercury capri 2000 The number 251 car is: mercury capri v6 The number 252 car is: mercury cougar brougham The number 253 car is: mercury grand marquis The number 254 car is: mercury lynx l The number 255 car is: mercury marquis The number 256 car is: mercury marquis brougham The number 257 car is: mercury monarch The number 258 car is: mercury monarch ghia The number 259 car is: mercury zephyr The number 260 car is: mercury zephyr 6 The number 261 car is: nissan stanza xe The number 262 car is: oldsmobile cutlass ciera (diesel) The number 263 car is: oldsmobile cutlass ls The number 264 car is: oldsmobile cutlass salon brougham The number 265 car is: oldsmobile cutlass salon brougham The number 266 car is: oldsmobile cutlass supreme The number 267 car is: oldsmobile delta 88 royale The number 268 car is: oldsmobile omega The number 269 car is: oldsmobile omega brougham The number 270 car is: oldsmobile starfire sx The number 271 car is: oldsmobile vista cruiser The number 272 car is: opel 1900 The number 273 car is: opel 1900 The number 274 car is: opel manta The number 275 car is: opel manta The number 276 car is: peugeot 304 The number 277 car is: peugeot 504 The number 278 car is: peugeot 504 The number 279 car is: peugeot 504 The number 280 car is: peugeot 504 The number 281 car is: peugeot 504 (sw) The number 282 car is: peugeot 505s turbo diesel The number 283 car is: peugeot 604sl The number 284 car is: plymouth 'cuda 340 The number 285 car is: plymouth arrow gs The number 286 car is: plymouth champ The number 287 car is: plymouth cricket The number 288 car is: plymouth custom suburb The number 289 car is: plymouth duster The number 290 car is: plymouth duster The number 291 car is: plymouth duster The number 292 car is: plymouth fury The number 293 car is: plymouth fury gran sedan The number 294 car is: plymouth fury iii The number 295 car is: plymouth fury iii The number 296 car is: plymouth fury iii The number 297 car is: plymouth grand fury The number 298 car is: plymouth horizon The number 299 car is: plymouth horizon 4 The number 300 car is: plymouth horizon miser The number 301 car is: plymouth horizon tc3 The number 302 car is: plymouth reliant The number 303 car is: plymouth reliant The number 304 car is: plymouth sapporo The number 305 car is: plymouth satellite The number 306 car is: plymouth satellite custom The number 307 car is: plymouth satellite custom (sw) The number 308 car is: plymouth satellite sebring The number 309 car is: plymouth valiant The number 310 car is: plymouth valiant The number 311 car is: plymouth valiant custom The number 312 car is: plymouth volare The number 313 car is: plymouth volare custom The number 314 car is: plymouth volare premier v8 The number 315 car is: pontiac astro The number 316 car is: pontiac catalina The number 317 car is: pontiac catalina The number 318 car is: pontiac catalina The number 319 car is: pontiac catalina brougham The number 320 car is: pontiac firebird The 
number 321 car is: pontiac grand prix The number 322 car is: pontiac grand prix lj The number 323 car is: pontiac j2000 se hatchback The number 324 car is: pontiac lemans v6 The number 325 car is: pontiac phoenix The number 326 car is: pontiac phoenix The number 327 car is: pontiac phoenix lj The number 328 car is: pontiac safari (sw) The number 329 car is: pontiac sunbird coupe The number 330 car is: pontiac ventura sj The number 331 car is: renault 12 (sw) The number 332 car is: renault 12tl The number 333 car is: renault 18i The number 334 car is: renault 5 gtl The number 335 car is: renault lecar deluxe The number 336 car is: saab 99e The number 337 car is: saab 99gle The number 338 car is: saab 99le The number 339 car is: saab 99le The number 340 car is: subaru The number 341 car is: subaru The number 342 car is: subaru dl The number 343 car is: subaru dl The number 344 car is: toyota carina The number 345 car is: toyota celica gt The number 346 car is: toyota celica gt liftback The number 347 car is: toyota corolla The number 348 car is: toyota corolla The number 349 car is: toyota corolla The number 350 car is: toyota corolla The number 351 car is: toyota corolla The number 352 car is: toyota corolla 1200 The number 353 car is: toyota corolla 1200 The number 354 car is: toyota corolla 1600 (sw) The number 355 car is: toyota corolla liftback The number 356 car is: toyota corolla tercel The number 357 car is: toyota corona The number 358 car is: toyota corona The number 359 car is: toyota corona The number 360 car is: toyota corona The number 361 car is: toyota corona hardtop The number 362 car is: toyota corona liftback The number 363 car is: toyota corona mark ii The number 364 car is: toyota cressida The number 365 car is: toyota mark ii The number 366 car is: toyota mark ii The number 367 car is: toyota starlet The number 368 car is: toyota tercel The number 369 car is: toyouta corona mark ii (sw) The number 370 car is: triumph tr7 coupe The number 371 car is: vokswagen rabbit The number 372 car is: volkswagen 1131 deluxe sedan The number 373 car is: volkswagen 411 (sw) The number 374 car is: volkswagen dasher The number 375 car is: volkswagen dasher The number 376 car is: volkswagen dasher The number 377 car is: volkswagen jetta The number 378 car is: volkswagen model 111 The number 379 car is: volkswagen rabbit The number 380 car is: volkswagen rabbit The number 381 car is: volkswagen rabbit custom The number 382 car is: volkswagen rabbit custom diesel The number 383 car is: volkswagen rabbit l The number 384 car is: volkswagen scirocco The number 385 car is: volkswagen super beetle The number 386 car is: volkswagen type 3 The number 387 car is: volvo 144ea The number 388 car is: volvo 145e (sw) The number 389 car is: volvo 244dl The number 390 car is: volvo 245 The number 391 car is: volvo 264gl The number 392 car is: volvo diesel The number 393 car is: vw dasher (diesel) The number 394 car is: vw pickup The number 395 car is: vw rabbit The number 396 car is: vw rabbit The number 397 car is: vw rabbit c (diesel) The number 398 car is: vw rabbit custom ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. 
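One simple way to do that (a sketch added for illustration, not part of the original text) is to keep the raw frame under its own name and assign the grouped summary to a new variable, so the ungrouped rows remain available afterwards. ###Code
# Sketch (illustrative): keep the raw rows intact and give the summary
# its own name; the original frame can still be inspected later.
import pandas as pd

raw_df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=['NA', '?'])

cylinder_summary = raw_df.groupby('cylinders')['mpg'].mean()

display(cylinder_summary)   # the condensed summary
display(raw_df.shape)       # the raw rows are untouched
###Output _____no_output_____ ###Markdown 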
The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The above data set can be used with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, makes use of this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Part 2.3: Grouping, Sorting, and Shuffling Shuffling a DatasetThe following code is used to shuffle and reindex a data set. A random seed can be used to produce a consistent shuffling of the data set. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) df.reset_index(inplace=True, drop=True) display(df[0:10]) ###Output _____no_output_____ ###Markdown Sorting a Data SetData sets can also be sorted. This code sorts the MPG dataset by name and displays the first car. 
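Sorting can also use several keys at once; the short sketch below is an illustration added here (not a lecture cell) that orders by **cylinders** descending and then **mpg** ascending within each cylinder group. ###Code
# Sketch (illustrative): multi-column sort on the same Auto MPG file.
import pandas as pd

df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=['NA', '?'])

# Sort on two keys: cylinders high-to-low, then mpg low-to-high within ties.
df_multi = df.sort_values(by=['cylinders', 'mpg'], ascending=[False, True])
display(df_multi[0:5])
###Output _____no_output_____ ###Markdown The lecture's single-column sort by name is in the next cell. 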
###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") display(df[0:5]) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a common operation on data sets. In SQL, this operation is referred to as "GROUP BY". Grouping is used to summarize data. Because of this summarization the row could will either stay the same or more likely shrink after a grouping is applied.The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) display(df[0:5]) ###Output _____no_output_____ ###Markdown The above data set can be used with group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to mean, other aggregating functions, such as **sum** or **count** can be used. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown This allows you to quickly access an individual element, such as to lookup the mean for 6 cylinders. This is used in target encoding, which is presented in this module. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown Part 2.4: Apply and Map The **apply** and **map** functions can also be applied to Pandas **dataframes**. Using Map with Dataframes ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) display(df[0:10]) df['origin_name'] = df['origin'].map({1: 'North America', 2: 'Europe', 3: 'Asia'}) display(df[0:50]) ###Output _____no_output_____ ###Markdown Using Apply with DataframesIf the **apply** function is directly executed on the data frame, the lambda function is called once per column or row, depending on the value of axis. For axis = 1, rows are used. The following code calculates a series called **efficiency** that is the **displacement** divided by **horsepower**. ###Code effi = df.apply(lambda x: x['displacement']/x['horsepower'], axis=1) display(effi[0:10]) ###Output _____no_output_____ ###Markdown Feature Engineering with Apply and Map In this section we will see how to calculate a complex feature using map, apply, and grouping. The data set is the following CSV:* https://www.irs.gov/pub/irs-soi/16zpallagi.csv This is US Government public data for "SOI Tax Stats - Individual Income Tax Statistics". The primary website is here:* https://www.irs.gov/statistics/soi-tax-stats-individual-income-tax-statistics-2016-zip-code-data-soi Documentation describing this data is at the above link.For this feature, we will attempt to estimate the adjusted gross income (AGI) for each of the zipcodes. The data file contains many columns; however, you will only use the following:* STATE - The state (e.g. MO)* zipcode - The zipcode (e.g. 
63017)* agi_stub - Six different brackets of annual income (1 through 6) * N1 - The number of tax returns for each of the agi_stubsNote, the file will have 6 rows for each zipcode, for each of the agi_stub brackets. You can skip zipcodes with 0 or 99999.We will create an output CSV with these columns; however, only one row per zip code. Calculate a weighted average of the income brackets. For example, the following 6 rows are present for 63017:|zipcode |agi_stub | N1 ||--|--|-- ||63017 |1 | 4710 ||63017 |2 | 2780 ||63017 |3 | 2130 ||63017 |4 | 2010 ||63017 |5 | 5240 ||63017 |6 | 3510 |We must combine these six rows into one. For privacy reasons, AGI's are broken out into 6 buckets. We need to combine the buckets and estimate the actual AGI of a zipcode. To do this, consider the values for N1:* 1 = \$1 to \$25,000* 2 = \$25,000 to \$50,000* 3 = \$50,000 to \$75,000* 4 = \$75,000 to \$100,000* 5 = \$100,000 to \$200,000* 6 = \$200,000 or moreThe median of each of these ranges is approximately:* 1 = \$12,500* 2 = \$37,500* 3 = \$62,500 * 4 = \$87,500* 5 = \$112,500* 6 = \$212,500Using this you can estimate 63017's average AGI as:```>>> totalCount = 4710 + 2780 + 2130 + 2010 + 5240 + 3510>>> totalAGI = 4710 * 12500 + 2780 * 37500 + 2130 * 62500 + 2010 * 87500 + 5240 * 112500 + 3510 * 212500>>> print(totalAGI / totalCount)88689.89205103042``` ###Code import pandas as pd df=pd.read_csv('https://www.irs.gov/pub/irs-soi/16zpallagi.csv') ###Output _____no_output_____ ###Markdown First, we trim all zipcodes that are either 0 or 99999. We also select the three fields that we need. ###Code df=df.loc[(df['zipcode']!=0) & (df['zipcode']!=99999),['STATE','zipcode','agi_stub','N1']] df ###Output _____no_output_____ ###Markdown We replace all of the **agi_stub** values with the correct median values with the **map** function. ###Code medians = {1:12500,2:37500,3:62500,4:87500,5:112500,6:212500} df['agi_stub']=df.agi_stub.map(medians) df ###Output _____no_output_____ ###Markdown Next the dataframe is grouped by zip code. ###Code groups = df.groupby(by='zipcode') ###Output _____no_output_____ ###Markdown A lambda is applied across the groups and the AGI estimate is calculated. ###Code df = pd.DataFrame(groups.apply(lambda x:sum(x['N1']*x['agi_stub'])/sum(x['N1']))).reset_index() df ###Output _____no_output_____ ###Markdown The new agi_estimate column is renamed. ###Code df.columns = ['zipcode','agi_estimate'] display(df[0:10]) ###Output _____no_output_____ ###Markdown We can also see that our zipcode of 63017 gets the correct value. ###Code df[ df['zipcode']==63017 ] ###Output _____no_output_____ ###Markdown Part 2.5: Feature Engineering Feature engineering is a very important part of machine learning. Later in this course we will see some techniques for automatic feature engineering. Calculated FieldsIt is possible to add new fields to the dataframe that are calculated from the other fields. We can create a new column that gives the weight in kilograms. The equation to calculate a metric weight, given a weight in pounds is:$ m_{(kg)} = m_{(lb)} \times 0.45359237 $This can be used with the following Python code: ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df.insert(1, 'weight_kg', (df['weight'] * 0.45359237).astype(int)) df ###Output _____no_output_____ ###Markdown Google API KeysSometimes you will use external API's to obtain data. 
The following examples show how to use the Google API keys to encode addresses for use with neural networks. To use these, you will need your own Google API key. The key I have below is not a real key, you need to put your own in there. Google will ask for a credit card, but unless you use a very large number of lookups, there will be no actual cost. YOU ARE NOT required to get an Google API key for this class, this only shows you how. If you would like to get a Google API key, visit this site and obtain one for **geocode**.[Google API Keys](https://developers.google.com/maps/documentation/embed/get-api-key) ###Code GOOGLE_KEY = 'INSERT_YOUR_KEY' ###Output _____no_output_____ ###Markdown Other Examples: Dealing with AddressesAddresses can be difficult to encode into a neural network. There are many different approaches, and you must consider how you can transform the address into something more meaningful. Map coordinates can be a good approach. [Latitude and longitude](https://en.wikipedia.org/wiki/Geographic_coordinate_system) can be a useful encoding. Thanks to the power of the Internet, it is relatively easy to transform an address into its latitude and longitude values. The following code determines the coordinates of [Washington University](https://wustl.edu/): ###Code import requests address = "1 Brookings Dr, St. Louis, MO 63130" response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?key={}&address={}'.format(GOOGLE_KEY,address)) resp_json_payload = response.json() if 'error_message' in resp_json_payload: print(resp_json_payload['error_message']) else: print(resp_json_payload['results'][0]['geometry']['location']) ###Output {'lat': 38.648238, 'lng': -90.30487459999999} ###Markdown If latitude and longitude are simply fed into the neural network as two features, they might not be overly helpful. These two values would allow your neural network to cluster locations on a map. Sometimes cluster locations on a map can be useful. Consider the percentage of the population that smokes in the USA by state:![Smokers by State](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_6_smokers.png "Smokers by State")The above map shows that certain behaviors, like smoking, can be clustered by global region. However, often you will want to transform the coordinates into distances. 
It is reasonably easy to estimate the distance between any two points on Earth by using the [great circle distance](https://en.wikipedia.org/wiki/Great-circle_distance) between any two points on a sphere:The following code implements this formula:$\Delta\sigma=\arccos\bigl(\sin\phi_1\cdot\sin\phi_2+\cos\phi_1\cdot\cos\phi_2\cdot\cos(\Delta\lambda)\bigr)$$d = r \, \Delta\sigma$ ###Code from math import sin, cos, sqrt, atan2, radians # Distance function def distance_lat_lng(lat1,lng1,lat2,lng2): # approximate radius of earth in km R = 6373.0 # degrees to radians (lat/lon are in degrees) lat1 = radians(lat1) lng1 = radians(lng1) lat2 = radians(lat2) lng2 = radians(lng2) dlng = lng2 - lng1 dlat = lat2 - lat1 a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlng / 2)**2 c = 2 * atan2(sqrt(a), sqrt(1 - a)) return R * c # Find lat lon for address def lookup_lat_lng(address): response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?key={}&address={}'.format(GOOGLE_KEY,address)) json = response.json() if len(json['results']) == 0: print("Can't find: {}".format(address)) return 0,0 map = json['results'][0]['geometry']['location'] return map['lat'],map['lng'] # Distance between two locations import requests address1 = "1 Brookings Dr, St. Louis, MO 63130" address2 = "3301 College Ave, Fort Lauderdale, FL 33314" lat1, lng1 = lookup_lat_lng(address1) lat2, lng2 = lookup_lat_lng(address2) print("Distance, St. Louis, MO to Ft. Lauderdale, FL: {} km".format( distance_lat_lng(lat1,lng1,lat2,lng2))) ###Output Distance, St. Louis, MO to Ft. Lauderdale, FL: 1684.9161446533758 km ###Markdown Distances can be useful to encode addresses as. You must consider what distance might be useful for your dataset. Consider:* Distance to major metropolitan area* Distance to competitor* Distance to distribution center* Distance to retail outletThe following code calculates the distance between 10 universities and washu: ###Code # Encoding other universities by their distance to Washington University schools = [ ["Princeton University, Princeton, NJ 08544", 'Princeton'], ["Massachusetts Hall, Cambridge, MA 02138", 'Harvard'], ["5801 S Ellis Ave, Chicago, IL 60637", 'University of Chicago'], ["Yale, New Haven, CT 06520", 'Yale'], ["116th St & Broadway, New York, NY 10027", 'Columbia University'], ["450 Serra Mall, Stanford, CA 94305", 'Stanford'], ["77 Massachusetts Ave, Cambridge, MA 02139", 'MIT'], ["Duke University, Durham, NC 27708", 'Duke University'], ["University of Pennsylvania, Philadelphia, PA 19104", 'University of Pennsylvania'], ["Johns Hopkins University, Baltimore, MD 21218", 'Johns Hopkins'] ] lat1, lng1 = lookup_lat_lng("1 Brookings Dr, St. 
Louis, MO 63130") for address, name in schools: lat2,lng2 = lookup_lat_lng(address) dist = distance_lat_lng(lat1,lng1,lat2,lng2) print("School '{}', distance to wustl is: {}".format(name,dist)) ###Output School 'Princeton', distance to wustl is: 1354.4748428037537 School 'Harvard', distance to wustl is: 1670.6348910867227 School 'University of Chicago', distance to wustl is: 418.07123096093096 School 'Yale', distance to wustl is: 1508.209168740192 School 'Columbia University', distance to wustl is: 1418.2846378506144 School 'Stanford', distance to wustl is: 2780.6884662205066 School 'MIT', distance to wustl is: 1672.4354422735219 School 'Duke University', distance to wustl is: 1046.7924543575177 School 'University of Pennsylvania', distance to wustl is: 1307.1873732319766 School 'Johns Hopkins', distance to wustl is: 1184.3754484499111 ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Part 2.3: Grouping, Sorting, and Shuffling Shuffling a DatasetThe following code is used to shuffle and reindex a data set. A random seed can be used to produce a consistent shuffling of the data set. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) df.reset_index(inplace=True, drop=True) display(df[0:10]) ###Output _____no_output_____ ###Markdown Sorting a Data SetData sets can also be sorted. This code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") display(df[0:5]) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a common operation on data sets. In SQL, this operation is referred to as "GROUP BY". Grouping is used to summarize data. 
Because of this summarization, the row count will either stay the same or, more likely, shrink after a grouping is applied. The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) display(df[0:5]) ###Output _____no_output_____ ###Markdown The above data set can be used with **groupby** to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to mean, other aggregating functions, such as **sum** or **count**, can be used. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown This allows you to quickly access an individual element, such as looking up the mean for 6 cylinders. This is used in target encoding, which is presented in this module. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Now we will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. 
Perhaps this dataset is ordered by the number of years that the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the first column, the row indexes, has not been reset. Generally, this will not cause any issues and allows trace back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) display(df[0:10]) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code df.reset_index(inplace=True, drop=True) display(df[0:10]) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always a good idea to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") display(df[0:5]) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) display(df[0:5]) ###Output _____no_output_____ ###Markdown The above data set can be used with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. 
For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, makes use of this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Now we will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. Perhaps this dataset is ordered by the number of years that the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the first column, the row indexes, has not been reset. 
Generally, this will not cause any issues and allows trace back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. ###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df.reset_index(inplace=True, drop=True) display(df) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always a good idea to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The above data set can be used with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, makes use of this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. 
###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Now we will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. Perhaps this dataset is ordered by the number of years that the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the first column, the row indexes, has not been reset. Generally, this will not cause any issues and allows trace back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. 
###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 15) display(df) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 15) df.reset_index(inplace=True, drop=True) display(df) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always a good idea to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The above data set can be used with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, makes use of this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____ ###Markdown T81-558: Applications of Deep Neural Networks**Module 2: Python for Machine Learning*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). 
Module 2 MaterialMain video lecture:* Part 2.1: Introduction to Pandas [[Video]](https://www.youtube.com/watch?v=bN4UuCBdpZc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_1_python_pandas.ipynb)* Part 2.2: Categorical Values [[Video]](https://www.youtube.com/watch?v=4a1odDpG0Ho&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_2_pandas_cat.ipynb)* **Part 2.3: Grouping, Sorting, and Shuffling in Python Pandas** [[Video]](https://www.youtube.com/watch?v=YS4wm5gD8DM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_3_pandas_grouping.ipynb)* Part 2.4: Using Apply and Map in Pandas for Keras [[Video]](https://www.youtube.com/watch?v=XNCEZ4WaPBY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_4_pandas_functional.ipynb)* Part 2.5: Feature Engineering in Pandas for Deep Learning in Keras [[Video]](https://www.youtube.com/watch?v=BWPTj4_Mi9E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_02_5_pandas_features.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow. ###Code try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ###Output Note: not using Google CoLab ###Markdown Part 2.3: Grouping, Sorting, and Shuffling Now we will take a look at a few ways to affect an entire Pandas data frame. These techniques will allow us to group, sort, and shuffle data sets. These are all essential operations for both data preprocessing and evaluation. Shuffling a DatasetThere may be information lurking in the order of the rows of your dataset. Unless you are dealing with time-series data, the order of the rows should not be significant. Consider if your training set included employees in a company. Perhaps this dataset is ordered by the number of years that the employees were with the company. It is okay to have an individual column that specifies years of service. However, having the data in this order might be problematic. Consider if you were to split the data into training and validation. You could end up with your validation set having only the newer employees and the training set longer-term employees. Separating the data into a k-fold cross validation could have similar problems. Because of these issues, it is important to shuffle the data set.Often shuffling and reindexing are both performed together. Shuffling randomizes the order of the data set. However, it does not change the Pandas row numbers. The following code demonstrates a reshuffle. Notice that the first column, the row indexes, has not been reset. Generally, this will not cause any issues and allows trace back to the original order of the data. However, I usually prefer to reset this index. I reason that I typically do not care about the initial position, and there are a few instances where this unordered index can cause issues. 
###Code import os import pandas as pd import numpy as np df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) #np.random.seed(42) # Uncomment this line to get the same shuffle each time df = df.reindex(np.random.permutation(df.index)) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The following code demonstrates a reindex. Notice how the reindex orders the row indexes. ###Code pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) df.reset_index(inplace=True, drop=True) display(df) ###Output _____no_output_____ ###Markdown Sorting a Data SetWhile it is always a good idea to shuffle a data set before training, during training and preprocessing, you may also wish to sort the data set. Sorting the data set allows you to order the rows in either ascending or descending order for one or more columns. The following code sorts the MPG dataset by name and displays the first car. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) df = df.sort_values(by='name', ascending=True) print(f"The first car is: {df['name'].iloc[0]}") pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output The first car is: amc ambassador brougham ###Markdown Grouping a Data SetGrouping is a typical operation on data sets. Structured Query Language (SQL) calls this operation a "GROUP BY." Programmers use grouping to summarize data. Because of this, the summarization row count will usually shrink, and you cannot undo the grouping. Because of this loss of information, it is essential to keep your original data before the grouping. The Auto MPG dataset is used to demonstrate grouping. ###Code import os import pandas as pd df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv", na_values=['NA', '?']) pd.set_option('display.max_columns', 7) pd.set_option('display.max_rows', 5) display(df) ###Output _____no_output_____ ###Markdown The above data set can be used with the group to perform summaries. For example, the following code will group cylinders by the average (mean). This code will provide the grouping. In addition to **mean**, you can use other aggregating functions, such as **sum** or **count**. ###Code g = df.groupby('cylinders')['mpg'].mean() g ###Output _____no_output_____ ###Markdown It might be useful to have these **mean** values as a dictionary. ###Code d = g.to_dict() d ###Output _____no_output_____ ###Markdown A dictionary allows you to access an individual element quickly. For example, you could quickly look up the mean for six-cylinder cars. You will see that target encoding, introduced later in this module, makes use of this technique. ###Code d[6] ###Output _____no_output_____ ###Markdown The code below shows how to count the number of rows that match each cylinder count. ###Code df.groupby('cylinders')['mpg'].count().to_dict() ###Output _____no_output_____
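###Markdown Returning to the grouping example above: as a quick, hypothetical illustration of why the dictionary form is convenient (a minimal sketch that reuses the `df` and `d` objects from the grouping cells; the column name `cylinders_mean_mpg` is only an example, not part of the original notebook), each row's cylinder count can be mapped to the mean mpg of its group. This is the same lookup idea that target encoding, mentioned above, builds on. ###Code
# Minimal sketch: map each row's cylinder count to the mean mpg of its group.
# Assumes df (the Auto MPG dataframe) and d = df.groupby('cylinders')['mpg'].mean().to_dict()
# from the cells above. The column name 'cylinders_mean_mpg' is an arbitrary example name.
df['cylinders_mean_mpg'] = df['cylinders'].map(d)
display(df[['cylinders', 'mpg', 'cylinders_mean_mpg']][0:5])
###Output _____no_output_____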
study_roadmaps/4_image_classification_zoo/Classifier - IEEE Camera model type classification.ipynb
###Markdown Table of contents Install Monk Using pretrained model for classifying source camera model type of images Training a classifier from scratch (Default mode) Install Monk Using pip (Recommended) - colab (gpu) - All bakcends: `pip install -U monk-colab` - kaggle (gpu) - All backends: `pip install -U monk-kaggle` - cuda 10.2 - All backends: `pip install -U monk-cuda102` - Gluon bakcned: `pip install -U monk-gluon-cuda102` - Pytorch backend: `pip install -U monk-pytorch-cuda102` - Keras backend: `pip install -U monk-keras-cuda102` - cuda 10.1 - All backend: `pip install -U monk-cuda101` - Gluon bakcned: `pip install -U monk-gluon-cuda101` - Pytorch backend: `pip install -U monk-pytorch-cuda101` - Keras backend: `pip install -U monk-keras-cuda101` - cuda 10.0 - All backend: `pip install -U monk-cuda100` - Gluon bakcned: `pip install -U monk-gluon-cuda100` - Pytorch backend: `pip install -U monk-pytorch-cuda100` - Keras backend: `pip install -U monk-keras-cuda100` - cuda 9.2 - All backend: `pip install -U monk-cuda92` - Gluon bakcned: `pip install -U monk-gluon-cuda92` - Pytorch backend: `pip install -U monk-pytorch-cuda92` - Keras backend: `pip install -U monk-keras-cuda92` - cuda 9.0 - All backend: `pip install -U monk-cuda90` - Gluon bakcned: `pip install -U monk-gluon-cuda90` - Pytorch backend: `pip install -U monk-pytorch-cuda90` - Keras backend: `pip install -U monk-keras-cuda90` - cpu - All backend: `pip install -U monk-cpu` - Gluon bakcned: `pip install -U monk-gluon-cpu` - Pytorch backend: `pip install -U monk-pytorch-cpu` - Keras backend: `pip install -U monk-keras-cpu` Install Monk Manually (Not recommended) Step 1: Clone the library - git clone https://github.com/Tessellate-Imaging/monk_v1.git Step 2: Install requirements - Linux - Cuda 9.0 - `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt` - Cuda 9.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt` - Cuda 10.0 - `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt` - Cuda 10.1 - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt` - Cuda 10.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt` - CPU (Non gpu system) - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt` - Windows - Cuda 9.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt` - Cuda 9.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt` - Cuda 10.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt` - Cuda 10.1 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt` - Cuda 10.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt` - CPU (Non gpu system) - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt` - Mac - CPU (Non gpu system) - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt` - Misc - Colab (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt` - Kaggle (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt` Step 3: Add to system path (Required for every terminal or kernel run) - `import sys` - `sys.path.append("monk_v1/");` Used trained classifier for demo ###Code #Using mxnet-gluon backend # When installed using pip from monk.gluon_prototype import prototype # When installed manually (Uncomment the 
following) #import os #import sys #sys.path.append("monk_v1/"); #sys.path.append("monk_v1/monk/"); #from monk.gluon_prototype import prototype # Download trained weights ! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1rVjR_mY9IDMt3xaXSCYcPGCsS3fABtAy' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1rVjR_mY9IDMt3xaXSCYcPGCsS3fABtAy" -O cls_camera_trained.zip && rm -rf /tmp/cookies.txt ! unzip -qq cls_camera_trained.zip ls workspace/Project-Camera/ # Load project in inference mode gtf = prototype(verbose=1); gtf.Prototype("Project-Camera", "Gluon-Resnet152", eval_infer=True); #Other trained models - uncomment #gtf.Prototype("Project-Camera", "Gluon-Resnet101", eval_infer=True); #gtf.Prototype("Project-Camera", "Gluon-Resnet50", eval_infer=True); #Infer img_name = "workspace/test/1.jpg" predictions = gtf.Infer(img_name=img_name); from IPython.display import Image Image(filename=img_name) img_name = "workspace/test/2.jpg" predictions = gtf.Infer(img_name=img_name); from IPython.display import Image Image(filename=img_name) img_name = "workspace/test/3.jpg" predictions = gtf.Infer(img_name=img_name); from IPython.display import Image Image(filename=img_name) img_name = "workspace/test/4.jpg" predictions = gtf.Infer(img_name=img_name); from IPython.display import Image Image(filename=img_name) ###Output _____no_output_____ ###Markdown Run test for submission ###Code from monk.gluon_prototype import prototype gtf = prototype(verbose=0); gtf.Prototype("Project-Camera", "Gluon-Resnet152", eval_infer=True); import os import cv2 lst = os.listdir("test/test"); combined = []; from tqdm import tqdm for i in tqdm(range(len(lst))): img_name = "test/test/" + lst[i]; img = cv2.imread(img_name); cv2.imwrite("test.jpg", img) img_name = "test.jpg" predictions = gtf.Infer(img_name=img_name); combined.append([lst[i], predictions["predicted_class"]]); import pandas as pd df = pd.DataFrame(combined, columns = ['fname', 'camera']) df.to_csv("submission.csv", index=False) # Submit to the competition - https://www.kaggle.com/c/sp-society-camera-model-identification/submit ###Output _____no_output_____ ###Markdown Train your own network Dataset - Credits: https://www.kaggle.com/c/sp-society-camera-model-identification ###Code ! kaggle competitions download -c sp-society-camera-model-identification ls -a ! 
unzip -qq sp-society-camera-model-identification.zip ###Output _____no_output_____ ###Markdown Training ###Code from monk.gluon_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("Project-Camera", "Gluon-Resnet50"); gtf.Default(dataset_path="train/train/", model_name="resnet50_v2", freeze_base_network=False, num_epochs=5); gtf.update_batch_size(64); gtf.update_save_intermediate_models(False); #important to reload post updates gtf.Reload(); gtf.Train(); ###Output _____no_output_____ ###Markdown Running inference on test images ###Code from monk.gluon_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("Project-Camera", "Gluon-Resnet50", eval_infer=True); import os lst = os.listdir("test/test"); import cv2 img_name = "test/test/" + lst[0]; img = cv2.imread(img_name); cv2.imwrite("test.jpg", img) img_name = "test.jpg" predictions = gtf.Infer(img_name=img_name); #Display from IPython.display import Image Image(filename=img_name) img_name = "test/test/" + lst[1]; img = cv2.imread(img_name); cv2.imwrite("test.jpg", img) img_name = "test.jpg" predictions = gtf.Infer(img_name=img_name); #Display from IPython.display import Image Image(filename=img_name) ###Output _____no_output_____ ###Markdown Run test for submission ###Code from monk.gluon_prototype import prototype gtf = prototype(verbose=0); gtf.Prototype("Project-Camera", "Gluon-Resnet50", eval_infer=True); import os import cv2 lst = os.listdir("test/test"); combined = []; from tqdm import tqdm for i in tqdm(range(len(lst))): img_name = "test/test/" + lst[i]; img = cv2.imread(img_name); cv2.imwrite("test.jpg", img) img_name = "test.jpg" predictions = gtf.Infer(img_name=img_name); combined.append([lst[i], predictions["predicted_class"]]); import pandas as pd df = pd.DataFrame(combined, columns = ['fname', 'camera']) df.to_csv("submission.csv", index=False) # Submit to the competition - https://www.kaggle.com/c/sp-society-camera-model-identification/submit ###Output _____no_output_____
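###Markdown An optional sanity check before uploading (a minimal sketch that only reuses the submission dataframe `df` built above; the specific checks are suggestions, not competition requirements): inspect the shape, the distribution of predicted camera models, and whether any file name appears twice. ###Code
# Minimal sketch: sanity-check the submission dataframe before uploading.
# Assumes df is the submission frame built above, with columns ['fname', 'camera'].
print(df.shape)                        # expect one row per test image
print(df['camera'].value_counts())     # one overwhelmingly dominant class can signal a training problem
print(df['fname'].duplicated().any())  # should print False
###Output _____no_output_____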
module1-intro-predictive-modeling/intro_predictive_modeling.ipynb
###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! Predict NYC apartment rent 🏠💸You'll use a real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code #!pip install --upgrade pandas-profiling import pandas_profiling pandas_profiling.__version__ import pandas as pd LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' #df = pd.read_csv(WEB) df = pd.read_csv(LOCAL) #for local jupyter instance assert df.shape == (48300, 34) ###Output _____no_output_____ ###Markdown Define the problem- Is this **supervised** learning?- Is this **tabular** data?- Is this **regression** or **classification**? ###Code df.info() df.describe() #df.profile_report() #OR IF YOU WANT TO SAVE PROFILE REPORT TO HTML FILE profile = df.profile_report() profile.to_file(output_file="output.html") df.sample(n=10) ###Output _____no_output_____ ###Markdown Explain why overfitting is a problem and model validation is important Jason Brownlee, [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/)> The goal of a good machine learning model is to **generalize** well from the training data to any data from the problem domain. This allows us to make predictions in the future on data the model has never seen.> The cause of poor performance in machine learning is either overfitting or underfitting the data.> **Overfitting** refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. > **Underfitting** refers to a model that can neither model the training data nor generalize to new data.> Ideally, you want to select a model at the sweet spot between underfitting and overfitting. 
Rob Hyndman & George Athanasopoulos, [_Forecasting: Principles and Practice_, Chapter 3.4](https://otexts.com/fpp2/accuracy.html), Evaluating forecast accuracy:> The following points should be noted.> - A model which fits the training data well will not necessarily forecast well.> - A perfect fit can always be obtained by using a model with enough parameters.> - Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.> **The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.**> When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.![](https://otexts.com/fpp2/fpp_files/figure-html/traintest-1.png)> The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The test set should ideally be at least as large as the maximum forecast horizon required.> Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. Other references call the training set the “in-sample data” and the test set the “out-of-sample data”. We prefer to use “training data” and “test data” in this book. Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. 
We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. Do train/test splitWe have two options for where we choose to split:- Time- Random This choice depends on your goals. Rachel Thomas explains why you may want to split on time: Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> If your data is a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates your are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set For this project, we'll split based on time. - Use data from April & May 2016 to train.- Use data from June 2016 to test.(But in some future projects this unit, we'll do random splits, and explain why.) ###Code df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True) df['created'] df['created'].describe() #Since dt is 'datetime64' we can extract the month and create a column df['month'] = df['created'].dt.month df['month'].describe() #Now we split the dataframe train = df[df['month'] < 6] test = df[df['month'] == 6] assert train.shape[0] + test.shape[0] == df.shape[0] train['created'].describe() test['created'].describe() ###Output _____no_output_____ ###Markdown Begin with baselines for regression Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. > What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. 
Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. ###Code #USE MEAN AS A BASELINE PREDICTION train['price'].mean() first10 = test[['price']].head(10) first10 #This is a series test['price'].head(10) #Notice storage of data is different if you use 2 brackets -> It's a column instead of series test[['price']].head(10) #Add a column first10['predicted'] = [3432] * 10 first10 first10['error'] = first10['price'] - first10['predicted'] first10 first10['error'].abs().mean() from sklearn.metrics import mean_absolute_error mean_absolute_error(first10['price'],first10['predicted']) y_test = test['price'] y_pred = [train['price'].mean()]*len(y_test) print(len(y_pred)) print(len(y_test)) mean_absolute_error(y_pred, y_test) ###Output _____no_output_____ ###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! 
Predict NYC apartment rent 🏠💸You'll use a real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: ###Code import pandas as pd LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' df = pd.read_csv(WEB) # df = pd.read_csv(LOCAL) assert df.shape == (48300, 34) ###Output _____no_output_____ ###Markdown Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code !pip install --upgrade pandas-profiling import pandas_profiling pandas_profiling.__version__ ###Output _____no_output_____ ###Markdown Define the problem- Is this **supervised** learning?- Is this **tabular** data?- Is this **regression** or **classification**? ###Code df.info() df.describe() df.sample(n=10) profile = df.profile_report() profile.to_file(output_file='output.html') ###Output _____no_output_____ ###Markdown Explain why overfitting is a problem and model validation is important Jason Brownlee, [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/)> The goal of a good machine learning model is to **generalize** well from the training data to any data from the problem domain. This allows us to make predictions in the future on data the model has never seen.> The cause of poor performance in machine learning is either overfitting or underfitting the data.> **Overfitting** refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. > **Underfitting** refers to a model that can neither model the training data nor generalize to new data.> Ideally, you want to select a model at the sweet spot between underfitting and overfitting. Rob Hyndman & George Athanasopoulos, [_Forecasting: Principles and Practice_, Chapter 3.4](https://otexts.com/fpp2/accuracy.html), Evaluating forecast accuracy:> The following points should be noted.> - A model which fits the training data well will not necessarily forecast well.> - A perfect fit can always be obtained by using a model with enough parameters.> - Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.> **The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.**> When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.![](https://otexts.com/fpp2/fpp_files/figure-html/traintest-1.png)> The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The test set should ideally be at least as large as the maximum forecast horizon required.> Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. 
Other references call the training set the “in-sample data” and the test set the “out-of-sample data”. We prefer to use “training data” and “test data” in this book. Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. Do train/test splitWe have two options for where we choose to split:- Time- Random This choice depends on your goals. Rachel Thomas explains why you may want to split on time: Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> If your data is a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates your are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set For this project, we'll split based on time. - Use data from April & May 2016 to train.- Use data from June 2016 to test.(But in some future projects this unit, we'll do random splits, and explain why.) 
###Code df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True) df['created'].describe() df['month'] = df['created'].dt.month df['month'].value_counts() train = df[df['month'] < 6] test = df[df['month'] == 6] assert train.shape[0] + test.shape[0] == df.shape[0] train['month'].value_counts() test['month'].value_counts() ###Output _____no_output_____ ###Markdown Begin with baselines for regression Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. > What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. 
Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may be to match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may be to exceed human performance. ###Code train['price'].mean() first10 = test[['price']].head(10) first10 first10['predicted'] = 3432 first10 first10['error'] = first10['price'] - first10['predicted'] first10 first10['error'].mean() first10['error'].abs().mean() from sklearn.metrics import mean_absolute_error mean_absolute_error(first10['price'], first10['predicted']) y_test = test['price'] y_pred = [train['price'].mean()]*len(y_test) print(len(y_pred)) print(len(y_test)) mean_absolute_error(y_test, y_pred) ###Output _____no_output_____ ###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! Predict NYC apartment rent 🏠💸You'll use real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: ###Code # Import pandas and load data via URL import pandas as pd #LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' df = pd.read_csv(WEB) assert df.shape == (48300, 34) ###Output _____no_output_____ ###Markdown Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code !pip install --upgrade pandas-profiling # Import pandas-profiling to get a neat HTML report of the contents of the data import pandas_profiling pandas_profiling.__version__ ###Output _____no_output_____ ###Markdown Define the problem- Is this **supervised** learning?- Is this **tabular** data?- Is this **regression** or **classification**? ###Code # Store the report in 'profile' and write it to an HTML file saved in the same location as this notebook profile = df.profile_report() profile.to_file(output_file = 'output.html') ###Output _____no_output_____ ###Markdown Explain why overfitting is a problem and model validation is important Jason Brownlee, [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/)> The goal of a good machine learning model is to **generalize** well from the training data to any data from the problem domain.
This allows us to make predictions in the future on data the model has never seen.> The cause of poor performance in machine learning is either overfitting or underfitting the data.> **Overfitting** refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. > **Underfitting** refers to a model that can neither model the training data nor generalize to new data.> Ideally, you want to select a model at the sweet spot between underfitting and overfitting. Rob Hyndman & George Athanasopoulos, [_Forecasting: Principles and Practice_, Chapter 3.4](https://otexts.com/fpp2/accuracy.html), Evaluating forecast accuracy:> The following points should be noted.> - A model which fits the training data well will not necessarily forecast well.> - A perfect fit can always be obtained by using a model with enough parameters.> - Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.> **The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.**> When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.![](https://otexts.com/fpp2/fpp_files/figure-html/traintest-1.png)> The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The test set should ideally be at least as large as the maximum forecast horizon required.> Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. Other references call the training set the “in-sample data” and the test set the “out-of-sample data”. We prefer to use “training data” and “test data” in this book. Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. 
We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. Do train/test splitWe have two options for where we choose to split:- Time- Random This choice depends on your goals. Rachel Thomas explains why you may want to split on time: Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> If your data is a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates you are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set For this project, we'll split based on time. - Use data from April & May 2016 to train.- Use data from June 2016 to test.(But in some future projects this unit, we'll do random splits, and explain why.) ###Code # From the profile output we see 'created' holds datetimes, but it's stored as a string (categorical) column. # Convert it into a datetime object df['created'] = pd.to_datetime(df['created'],infer_datetime_format = True) df['created'].head() # Now split our dataset into training and test sets: observations from June are the test data, and the rest are the training data # Create another feature, 'month', from 'created' ----> this is called "feature engineering" df['month'] = df['created'].dt.month train = df[df['month'] < 6] test = df[df['month'] == 6] # Assert to check that the number of observations in our training and test sets adds up to the full dataset assert train.shape[0] + test.shape[0] == df.shape[0] ###Output _____no_output_____ ###Markdown Begin with baselines for regression ###Code # To establish a baseline and see how far it is off from the actual data, we use a simple model as the baseline.
# just taking mean as our model for now train['price'].mean() #lets first take first 10 values from our test set first10 = test[['price']].head(10) first10 # assume our model predicted mean of training set as predicted value first10['predicted'] = [3432]*10 # find out by how much are we off in our prediction by using absolute error from sklearn.metrics import mean_absolute_error mean_absolute_error(first10['price'], first10['predicted']) # lets apply to entire test data now y_test = test['price'] y_pred = [train['price'].mean()]*len(y_test) mean_absolute_error(y_test, y_pred) ###Output _____no_output_____ ###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! Predict NYC apartment rent 🏠💸You'll use a real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: ###Code LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' df = pd.read_csv(WEB) assert df.shape == (48300, 34) ###Output _____no_output_____ ###Markdown Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code !pip install --upgrade pandas-profiling import pandas_profiling pandas_profiling.__version__ ###Output _____no_output_____ ###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! 
Predict NYC apartment rent 🏠💸You'll use a real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: ###Code LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' import pandas as pd df = pd.read_csv(WEB) assert df.shape == (48300, 34) ###Output _____no_output_____ ###Markdown Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code !pip install --upgrade pandas-profiling import pandas_profiling pandas_profiling.__version__ ###Output _____no_output_____ ###Markdown Define the problem- Is this **supervised** learning?- Is this **tabular** data?- Is this **regression** or **classification**? ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 48300 entries, 0 to 48299 Data columns (total 34 columns): bathrooms 48300 non-null float64 bedrooms 48300 non-null int64 created 48300 non-null object description 46879 non-null object display_address 48168 non-null object latitude 48300 non-null float64 longitude 48300 non-null float64 price 48300 non-null int64 street_address 48290 non-null object interest_level 48300 non-null object elevator 48300 non-null int64 cats_allowed 48300 non-null int64 hardwood_floors 48300 non-null int64 dogs_allowed 48300 non-null int64 doorman 48300 non-null int64 dishwasher 48300 non-null int64 no_fee 48300 non-null int64 laundry_in_building 48300 non-null int64 fitness_center 48300 non-null int64 pre-war 48300 non-null int64 laundry_in_unit 48300 non-null int64 roof_deck 48300 non-null int64 outdoor_space 48300 non-null int64 dining_room 48300 non-null int64 high_speed_internet 48300 non-null int64 balcony 48300 non-null int64 swimming_pool 48300 non-null int64 new_construction 48300 non-null int64 exclusive 48300 non-null int64 terrace 48300 non-null int64 loft 48300 non-null int64 garden_patio 48300 non-null int64 common_outdoor_space 48300 non-null int64 wheelchair_access 48300 non-null int64 dtypes: float64(3), int64(26), object(5) memory usage: 12.5+ MB ###Markdown Explain why overfitting is a problem and model validation is important Jason Brownlee, [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/)> The goal of a good machine learning model is to **generalize** well from the training data to any data from the problem domain. This allows us to make predictions in the future on data the model has never seen.> The cause of poor performance in machine learning is either overfitting or underfitting the data.> **Overfitting** refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. > **Underfitting** refers to a model that can neither model the training data nor generalize to new data.> Ideally, you want to select a model at the sweet spot between underfitting and overfitting. 
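To make that sweet spot concrete, here is a small, self-contained sketch on synthetic data (the sine curve, noise level, and polynomial degrees are arbitrary choices, unrelated to the rent project): a degree-1 fit underfits, a degree-15 fit can chase the noise, and a large gap between training error and held-out error is the warning sign. ###Code
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic toy data: a noisy sine curve (nothing to do with the rent data)
rng = np.random.RandomState(0)
X_demo = rng.uniform(0, 1, (40, 1))
y_demo = np.sin(2 * np.pi * X_demo).ravel() + rng.normal(0, 0.2, 40)
X_fit, X_holdout = X_demo[:30], X_demo[30:]
y_fit, y_holdout = y_demo[:30], y_demo[30:]

# Compare polynomial fits of increasing flexibility
for degree in [1, 4, 15]:
    poly_model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    poly_model.fit(X_fit, y_fit)
    train_mse = mean_squared_error(y_fit, poly_model.predict(X_fit))
    holdout_mse = mean_squared_error(y_holdout, poly_model.predict(X_holdout))
    print(f'degree={degree:>2}  train MSE={train_mse:.3f}  holdout MSE={holdout_mse:.3f}') ###Output _____no_output_____ ###Markdown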
Rob Hyndman & George Athanasopoulos, [_Forecasting: Principles and Practice_, Chapter 3.4](https://otexts.com/fpp2/accuracy.html), Evaluating forecast accuracy:> The following points should be noted.> - A model which fits the training data well will not necessarily forecast well.> - A perfect fit can always be obtained by using a model with enough parameters.> - Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.> **The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.**> When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.![](https://otexts.com/fpp2/fpp_files/figure-html/traintest-1.png)> The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The test set should ideally be at least as large as the maximum forecast horizon required.> Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. Other references call the training set the “in-sample data” and the test set the “out-of-sample data”. We prefer to use “training data” and “test data” in this book. Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. 
We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. Do train/test splitWe have two options for where we choose to split:- Time- Random This choice depends on your goals. Rachel Thomas explains why you may want to split on time: Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> If your data is a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates your are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set For this project, we'll split based on time. - Use data from April & May 2016 to train.- Use data from June 2016 to test.(But in some future projects this unit, we'll do random splits, and explain why.) ###Code df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True) df['month'] = df['created'].dt.month train = df.query('month < 6') test = df.query('month == 6') train.describe() test.describe() ###Output _____no_output_____ ###Markdown Begin with baselines for regression Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. > What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. 
Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. ###Code train.head() features = ['bedrooms', 'hardwood_floors', 'swimming_pool', 'new_construction', 'elevator', 'bathrooms', 'exclusive', 'latitude', 'longitude'] from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(train[features], train['price']) test['predictions'] = model.predict(test[features]) test['abs_error'] = (test['predictions'] - test['price']).abs() test.head() test['abs_error'].mean() ###Output _____no_output_____ ###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! 
Predict NYC apartment rent 🏠💸You'll use a real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code import pandas as pd LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' df = pd.read_csv(WEB) assert df.shape == (48300, 34) !pip install --upgrade pandas-profiling import pandas_profiling pandas_profiling.__version__ ###Output _____no_output_____ ###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! Predict NYC apartment rent 🏠💸You'll use a real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: ###Code import pandas as pd LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' df = pd.read_csv(LOCAL) assert df.shape == (48300, 34) ###Output _____no_output_____ ###Markdown Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code !pip install --upgrade pandas-profiling import pandas_profiling pandas_profiling.__version__ ###Output _____no_output_____ ###Markdown Define the problem- Is this **supervised** learning?- Is this **tabular** data?- Is this **regression** or **classification**? ###Code df.info() df.describe() df.sample(n=10) profile = df.profile_report() profile.to_file(output_file="output.html") ###Output _____no_output_____ ###Markdown Explain why overfitting is a problem and model validation is important Jason Brownlee, [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/)> The goal of a good machine learning model is to **generalize** well from the training data to any data from the problem domain. This allows us to make predictions in the future on data the model has never seen.> The cause of poor performance in machine learning is either overfitting or underfitting the data.> **Overfitting** refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. 
> **Underfitting** refers to a model that can neither model the training data nor generalize to new data.> Ideally, you want to select a model at the sweet spot between underfitting and overfitting. Rob Hyndman & George Athanasopoulos, [_Forecasting: Principles and Practice_, Chapter 3.4](https://otexts.com/fpp2/accuracy.html), Evaluating forecast accuracy:> The following points should be noted.> - A model which fits the training data well will not necessarily forecast well.> - A perfect fit can always be obtained by using a model with enough parameters.> - Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.> **The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.**> When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.![](https://otexts.com/fpp2/fpp_files/figure-html/traintest-1.png)> The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The test set should ideally be at least as large as the maximum forecast horizon required.> Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. Other references call the training set the “in-sample data” and the test set the “out-of-sample data”. We prefer to use “training data” and “test data” in this book. Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. 
In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. Do train/test splitWe have two options for where we choose to split:- Time- Random This choice depends on your goals. Rachel Thomas explains why you may want to split on time: Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> If your data is a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates your are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set For this project, we'll split based on time. - Use data from April & May 2016 to train.- Use data from June 2016 to test.(But in some future projects this unit, we'll do random splits, and explain why.) ###Code df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True) df['created'] df['created'].describe() df['month'] = df['created'].dt.month df['month'].describe() train = df[df['month'] < 6] test = df[df['month'] == 6] assert train.shape[0] + test.shape[0] == df.shape[0] train['created'].describe() test['created'].describe() ###Output _____no_output_____ ###Markdown Begin with baselines for regression Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. > What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. 
Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. ###Code train['price'].mean() first10 = test[['price']].head(10) first10 first10['predicted'] = [3432]*10 first10 first10['error'] = first10['price'] - first10['predicted'] first10 first10['error'].abs().mean() from sklearn.metrics import mean_absolute_error mean_absolute_error(first10['price'], first10['predicted']) y_test = test['price'] y_pred = [train['price'].mean()] * len(y_test) print(len(y_test)) print(len(y_pred)) mean_absolute_error(y_pred, y_test) mean_absolute_error(y_test, y_pred) df.head() features4 = ['cats_allowed', 'dogs_allowed'] ###Output _____no_output_____ ###Markdown _Lambda School Data Science — Linear Models_ Intro to Predictive Modeling Objectives- recognize examples of supervised learning with tabular data- distinguish between regression problems and classification problems- explain why overfitting is a problem and model validation is important- do train/test split- begin with baselines for regression I like Brandon Rohrer’s blog post, [“What questions can machine learning answer?”](https://brohrer.github.io/five_questions_data_science_answers.html)We’ll focus on two of these questions in Unit 2. These are both types of “supervised learning.”- “Is this A or B?” (Classification)- “How Much / How Many?” (Regression)**This unit, you’ll do four supervised learning projects** with “tabular data” (data in tables, like spreadsheets).- Predict New York City apartment rents <-- **Today, we'll start this project!**- Predict which water pumps in Tanzania need repairs- Predict the prices suppliers will quote Caterpillar for industrial parts- Choose your own labeled, tabular dataset, train a predictive model, and publish a blog post or web app with visualizations to explain your model! 
Predict NYC apartment rent 🏠💸You'll use a real-world data with rent prices for a subset of apartments in New York City!Run this code cell to load the dataset: ###Code LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' df = pd.read_csv(WEB) assert df.shape == (48300, 34) ###Output _____no_output_____ ###Markdown Install [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2 ###Code !pip install --upgrade pandas-profiling import pandas_profiling pandas_profiling.__version__ ###Output _____no_output_____ ###Markdown Define the problem- Is this **supervised** learning?- Is this **tabular** data?- Is this **regression** or **classification**? ###Code ###Output _____no_output_____ ###Markdown Explain why overfitting is a problem and model validation is important Jason Brownlee, [Overfitting and Underfitting With Machine Learning Algorithms](https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/)> The goal of a good machine learning model is to **generalize** well from the training data to any data from the problem domain. This allows us to make predictions in the future on data the model has never seen.> The cause of poor performance in machine learning is either overfitting or underfitting the data.> **Overfitting** refers to a model that models the training data too well. Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. > **Underfitting** refers to a model that can neither model the training data nor generalize to new data.> Ideally, you want to select a model at the sweet spot between underfitting and overfitting. Rob Hyndman & George Athanasopoulos, [_Forecasting: Principles and Practice_, Chapter 3.4](https://otexts.com/fpp2/accuracy.html), Evaluating forecast accuracy:> The following points should be noted.> - A model which fits the training data well will not necessarily forecast well.> - A perfect fit can always be obtained by using a model with enough parameters.> - Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.> **The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.**> When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.![](https://otexts.com/fpp2/fpp_files/figure-html/traintest-1.png)> The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The test set should ideally be at least as large as the maximum forecast horizon required.> Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. Other references call the training set the “in-sample data” and the test set the “out-of-sample data”. We prefer to use “training data” and “test data” in this book. 
Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. Do train/test splitWe have two options for where we choose to split:- Time- Random This choice depends on your goals. Rachel Thomas explains why you may want to split on time: Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> If your data is a time series, choosing a random subset of the data will be both too easy (you can look at the data both before and after the dates you are trying to predict) and not representative of most business use cases (where you are using historical data to build a model for use in the future). If your data includes the date and you are building a model to use in the future, you will want to choose a continuous section with the latest dates as your validation set For this project, we'll split based on time. - Use data from April & May 2016 to train.- Use data from June 2016 to test.(But in some future projects this unit, we'll do random splits, and explain why.)
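Here is a minimal sketch of that time-based split, mirroring the completed versions of this lesson that appear earlier in this file: ###Code
# Parse the listing timestamp, then split on month:
# April & May 2016 -> train, June 2016 -> test
df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True)
df['month'] = df['created'].dt.month
train = df[df['month'] < 6]
test = df[df['month'] == 6]
assert train.shape[0] + test.shape[0] == df.shape[0] ###Output _____no_output_____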
###Code ###Output _____no_output_____ ###Markdown Begin with baselines for regression Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. > What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. ###Code ###Output _____no_output_____
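A sketch of the mean-price baseline described above, scored with mean absolute error (this mirrors the worked examples earlier in this file and assumes the time-based `train`/`test` split sketched in the previous section): ###Code
from sklearn.metrics import mean_absolute_error

# Guess the training-set mean price for every test listing,
# then score that guess with mean absolute error
y_test = test['price']
y_pred = [train['price'].mean()] * len(y_test)
mean_absolute_error(y_test, y_pred) ###Output _____no_output_____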
docs/source/example_notebooks/dowhy_simple_example.ipynb
###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.First, let us load all required packages. ###Code import numpy as np import pandas as pd from dowhy import CausalModel import dowhy.datasets # Avoid printing dataconversion warnings from sklearn and numpy import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) warnings.filterwarnings(action='ignore', category=FutureWarning) # Config dict to set the logging level import logging import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'WARN', }, } } logging.config.dictConfig(DEFAULT_LOGGING) logging.info("Getting started with DoWhy. Running notebook...") ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=5000, treatment_is_binary=True, stddev_treatment_noise=10, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output _____no_output_____ ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. DoWhy philosophy: Keep identification and estimation separateIdentification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps. Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output _____no_output_____ ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. 
Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output _____no_output_____ ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), a lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. ###Code # Causal effect on the control group (ATC) causal_estimate_atc = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_atc) print("Causal Estimate is " + str(causal_estimate_atc.value)) ###Output _____no_output_____ ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation are done as before. Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output _____no_output_____ ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; a) Random Common Cause b) Data Subset 2) **Nullifying transformations**: after the data change, the true causal estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test.
a) Placebo Treatment Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output _____no_output_____ ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output _____no_output_____ ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output _____no_output_____ ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output _____no_output_____ ###Markdown Adding an unobserved common cause variableThis refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome. ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output _____no_output_____ ###Markdown It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. The output is the *(min, max)* range of the estimated effects under different unobserved confounding. 
###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know the maximum plausible confounding effect on treatment. Since we see that the effect does not go beyond zero, we can safely conclude that the causal effect of treatment `v0` is positive.We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap. ###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02]) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown **Automatically inferring effect strength parameters.** Finally, DoWhy supports automatic selection of the effect strength parameters. This is based on an assumption that the effect of the unobserved confounder on treatment or outcome cannot be stronger than that of any observed confounder. That is, we have collected data at least for the most relevant confounder. If that is the case, then we can bound the range of `effect_strength_on_treatment` and `effect_strength_on_outcome` by the effect strength of observed confounders. There is an additional optional parameter signifying whether the effect strength of unobserved confounder should be as high as the highest observed, or a fraction of it. You can set it using the optional `effect_fraction_on_treatment` and `effect_fraction_on_outcome` parameters. By default, these two parameters are 1. ###Code res_unobserved_auto = model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear") print(res_unobserved_auto) ###Output _____no_output_____ ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us load all required packages. ###Code import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets # Avoid printing dataconversion warnings from sklearn import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) # Config dict to set the logging level import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'WARN', }, } } logging.config.dictConfig(DEFAULT_LOGGING) ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. 
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=20000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 1.300684 1.0 0.745564 0.546110 1.147038 0.281075 0.428693 0 True 1 -0.744317 1.0 0.987650 -2.330280 -1.482876 -0.041076 -0.315462 0 True 2 2.002841 1.0 0.871926 0.422410 -1.149994 0.016466 -0.395341 1 True 3 2.171081 0.0 0.505475 -1.969983 -1.212644 -0.704051 0.923119 3 True 4 -0.524249 1.0 0.254717 -0.784000 -1.000395 0.931530 0.957233 1 True y 0 20.632923 1 -6.180148 2 15.717277 3 15.982810 4 6.391087 digraph { U[label="Unobserved Confounders"]; U->y;v0->y;U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "Unobserved Confounders" target "v0"]edge[source "v0" target "y"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. 
This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W2,X0,Z0,W1,Z1,W4,W0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W2,X0,Z0,W1,Z1,W4,W0,U) = P(y|v0,W3,W2,X0,Z0,W1,Z1,W4,W0) ### Estimand : 2 Estimand name: iv Estimand expression: Expectation(Derivative(y, [Z0, Z1])*Derivative([v0], [Z0, Z1])**(-1)) Estimand assumption 1, As-if-random: If U→→y then ¬(U →→{Z0,Z1}) Estimand assumption 2, Exclusion: If we remove {Z0,Z1}→{v0}, then ¬({Z0,Z1}→y) ### Estimand : 3 Estimand name: frontdoor No such variable found! ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W2,X0,Z0,W1,Z1,W4,W0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W2,X0,Z0,W1,Z1,W4,W0,U) = P(y|v0,W3,W2,X0,Z0,W1,Z1,W4,W0) ## Realized estimand b: y~v0+W3+W2+X0+Z0+W1+Z1+W4+W0 Target units: ate ## Estimate Mean value: 11.328895529046541 Causal Estimate is 11.328895529046541 ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. 
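As an illustration of the lambda form of "target_units", the following sketch (not part of the original notebook; the condition on `W0` is an arbitrary example) restricts the estimate to a subset of rows: ###Code
# Sketch (not in the original notebook): restrict the estimate to a subset of units
# using a lambda that filters rows of the data frame. The condition on W0 is arbitrary.
causal_estimate_subset = model.estimate_effect(identified_estimand,
        method_name="backdoor.propensity_score_stratification",
        target_units=lambda rows: rows["W0"] > 0)
print("Causal Estimate on units with W0 > 0 is " + str(causal_estimate_subset.value))
###Output _____no_output_____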
###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W2,X0,Z0,W1,Z1,W4,W0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W2,X0,Z0,W1,Z1,W4,W0,U) = P(y|v0,W3,W2,X0,Z0,W1,Z1,W4,W0) ## Realized estimand b: y~v0+W3+W2+X0+Z0+W1+Z1+W4+W0 Target units: atc ## Estimate Mean value: 11.126211726969387 Causal Estimate is 11.126211726969387 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W2,W1,W4,W0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W2,W1,W4,W0,U) = P(y|v0,W3,W2,W1,W4,W0) ## Realized estimand b: y~v0+W3+W2+W1+W4+W0 Target units: ate ## Estimate Mean value: 11.855198037895043 Causal Estimate is 11.855198037895043 ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. 
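The refuters below are run one cell at a time; as a small convenience sketch (not part of the original notebook), the same calls can also be driven from a loop, using exactly the method names and arguments shown in the cells that follow: ###Code
# Sketch (not in the original notebook): run several refuters in one loop.
# The method names and keyword arguments mirror the individual cells below.
refuter_configs = {
    "random_common_cause": {},
    "placebo_treatment_refuter": {"placebo_type": "permute"},
    "data_subset_refuter": {"subset_fraction": 0.9},
}
for method, kwargs in refuter_configs.items():
    refutation = model.refute_estimate(identified_estimand, estimate,
                                       method_name=method, **kwargs)
    print(refutation)
###Output _____no_output_____ ###Markdown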
Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output Refute: Add a Random Common Cause Estimated effect:11.855198037895043 New effect:11.863629792500776 ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output Refute: Add an Unobserved Common Cause Estimated effect:11.855198037895043 New effect:9.596483835729474 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output Refute: Use a Placebo Treatment Estimated effect:11.855198037895043 New effect:-0.028187686947292626 p value:0.43000000000000005 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:11.855198037895043 New effect:11.870077917506594 p value:0.5 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:11.855198037895043 New effect:11.870584658084935 p value:0.44 ###Markdown DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) # Let's check the python version. print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome.
Beta是真正的因果效应。 ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 \ 0 0.298611 1.0 0.472086 1.968652 0.008722 1.085433 -0.999968 1.389409 1 -0.048842 1.0 0.584457 2.898602 -0.904939 -0.745294 -0.980058 0.969095 2 -0.123133 1.0 0.138142 -0.802696 -0.790802 1.029180 0.010684 -0.205064 3 -0.248771 0.0 0.098777 1.297670 -1.027000 0.792586 1.247469 -0.007736 4 -0.583826 1.0 0.924724 -1.341020 -1.295737 -0.612708 -2.955439 -0.674400 v0 y 0 True 9.440576 1 True 7.356542 2 True 10.143119 3 True 17.301749 4 False -15.930469 digraph { U[label="Unobserved Confounders"]; U->y;v0->y; U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "v0" target "y"]edge[source "Unobserved Confounders" target "v0"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. 目前,DoWhy 仅支持 pandas 数据框作为输入。 建立因果模型有两种方式来指定因果模型中的因果图,包括直接输入因果图和指定 Common causes and IVs。 Interface 1: 输入因果图(recommended) 现在,我们以GML图格式输入因果图(推荐)。您也可以使用DOT格式。 ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"], proceed_when_unidentifiable=True ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown 上面的因果图显示了因果模型中编码的因果关系假设。现在,我们可以使用此图首先 identify 因果效应 (go from a causal estimand to a probability expression),然后估计因果效应。 **DoWhy 的哲学: 把识别和估计分开**Identification 问题仅仅需要直到因果图,而不需要直到数据就可以回答。 This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step. 把 Identification 和 Estimation 分开是一件重要的事情。* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W3', 'Unobserved Confounders', 'W1', 'W4', 'W2', 'W0'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z1', 'Z0'] ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). 
The same parameter can also be added when instantiating the CausalModel object. ###Code # identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) # print(identified_estimand) ###Output _____no_output_____ ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py:724: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). y = column_or_1d(y, warn=True) INFO:numexpr.utils:NumExpr defaulting to 4 threads. ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. ###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown Interface 2: Specify common causes and instrumentsAnother way to build the causal model is to specify the common causes and instrumental variables; DoWhy then automatically treats the remaining covariates as confounders. ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"], proceed_when_unidentifiable=True) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation are done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['U', 'W3', 'W1', 'W4', 'W2', 'W0'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown Refuting the estimate (robustness analysis)Let us now refute the estimate obtained using several methods.
Adding a random common cause variableAfter adding a random common cause, the estimated causal effect should not change much. ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0+w_random ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown Replacing treatment with a random (placebo) variable After replacing the treatment with a random variable, the estimated causal effect should be close to zero. ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W3+W1+W4+W2+W0 ###Markdown Removing a random subset of the dataAfter removing a random subset of the data, the estimated causal effect should not differ much. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations. For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py:724: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). y = column_or_1d(y, warn=True) ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect.
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 1.418641 0.0 0.251127 0.237823 1.294957 -2.197657 0.671354 1 True 1 0.050088 0.0 0.041706 -1.199278 3.143332 -1.738985 -2.766051 1 False 2 -0.480051 0.0 0.974275 -1.957273 -0.065116 0.175567 -1.829176 1 True 3 0.338169 1.0 0.727792 -0.245409 0.099252 0.998839 -0.870295 0 True 4 -1.026205 0.0 0.983040 -0.147827 1.538178 0.441017 0.343857 2 True y 0 19.477585 1 -5.857091 2 2.179680 3 7.307447 4 15.688496 digraph { U[label="Unobserved Confounders"]; U->y;v0->y; U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "v0" target "y"]edge[source "Unobserved Confounders" target "v0"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. 
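A minimal sketch of that constructor variant follows (this cell is not part of the original notebook; it simply mirrors the model built above with the flag passed at construction time): ###Code
# Sketch (not in the original notebook): pass the flag when building the model,
# so identify_effect() can be called without repeating it.
model_quiet = CausalModel(
    data=df,
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
    proceed_when_unidentifiable=True,
)
identified_estimand_quiet = model_quiet.identify_effect()
print(identified_estimand_quiet)
###Output _____no_output_____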
###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z0', 'Z1'] ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W2+W3+W0+W4+X0 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. ###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W2+W3+W0+W4+X0 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. 
INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4+w_random /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_refuters.placebo_treatment_refuter:Refutation over 100 simulated datasets of permute treatment INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 9000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 9000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W3+W0+W1+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Getting started with DoWhy: A simple example This is a quick introduction to the DoWhy causal inference library. We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable. First, let us load all required packages. ###Code import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets # Avoid printing dataconversion warnings from sklearn import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect.
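To make "Beta is the true causal effect" concrete, here is a minimal, purely illustrative sketch of this kind of data-generating process (it is *not* the code inside `dowhy.datasets.linear_dataset`; the coefficients and the logistic treatment assignment below are made-up assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 1000, 10.0                        # beta is the true (known) causal effect
W = rng.normal(size=(n, 2))                 # common causes of treatment and outcome
p_treat = 1 / (1 + np.exp(-W.sum(axis=1)))  # common causes drive the treatment probability
v = rng.binomial(1, p_treat)                # binary treatment
y = beta * v + W @ np.array([1.0, 2.0]) + rng.normal(size=n)  # treatment and common causes drive the outcome
```

Because the treatment coefficient is known by construction, the estimates produced later can be judged against it.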
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=20000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 -0.483662 0.0 0.668529 -1.834217 0.899764 -0.027259 -1.162514 2 True 1 -0.939733 0.0 0.893616 -1.211697 -0.507831 0.524436 2.010319 3 True 2 -0.601658 0.0 0.134469 -0.567716 -0.923255 0.220309 0.452579 2 False 3 0.144248 0.0 0.793546 -0.263509 -1.295114 1.636785 -0.414075 2 True 4 1.044430 0.0 0.152600 -0.561016 -0.493955 1.239707 0.058385 0 True y 0 1.998400 1 14.928353 2 0.423244 3 9.299319 4 12.149923 digraph { U[label="Unobserved Confounders"]; U->y;v0->y;U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "Unobserved Confounders" target "v0"]edge[source "v0" target "y"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate** Identification can be achieved without access to the data, accessing only the graph. This results in an expression to be computed.
This expression can then be evaluated using the available data in the estimation step. It is important to understand that these are orthogonal steps. * Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z0', 'Z1'] INFO:dowhy.causal_identifier:Frontdoor variables for treatment and outcome:[] ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W1+W3+W4+W0+X0 ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter, which can be a string ("ate", "att", or "atc"), a lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. ###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W1+W3+W4+W0+X0 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True.
INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] INFO:dowhy.causal_identifier:Frontdoor variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W4+W1+W0+W3 ###Markdown Refuting the estimate Let us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W4+W1+W0+W3+w_random ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W4+W1+W0+W3 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_refuters.placebo_treatment_refuter:Refutation over 100 simulated datasets of permute treatment INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W4+W1+W0+W3 INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W4+W1+W0+W3
INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W2+W4+W1+W0+W3 INFO:dowhy.causal_refuters.placebo_treatment_refuter:Making use of Bootstrap as we have more than 100 examples. Note: The greater the number of examples, the more accurate are the confidence estimates ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 18000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W4+W1+W0+W3
INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W4+W1+W0+W3 INFO:dowhy.causal_refuters.data_subset_refuter:Making use of Bootstrap as we have more than 100 examples. Note: The greater the number of examples, the more accurate are the confidence estimates ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations. For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 18000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W2+W4+W1+W0+W3
INFO:dowhy.causal_refuters.data_subset_refuter:Making use of Bootstrap as we have more than 100 examples. Note: The greater the number of examples, the more accurate are the confidence estimates ###Markdown Getting started with DoWhy: A simple example >>> Heyang Gong, A test This is a quick introduction to the DoWhy causal inference library. We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable. First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../")) ###Output _____no_output_____ ###Markdown Let's check the Python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect.
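Once the next cell has created the `data` dictionary, it can be helpful to print the column names it generated; this optional inspection uses only keys that already appear in the cells of this notebook:

```python
# Optional: inspect the simulated dataset (run after `data` has been created below).
print(data["treatment_name"])       # name(s) of the treatment column
print(data["outcome_name"])         # name of the outcome column
print(data["common_causes_names"])  # names of the common-cause columns
data["df"].head()                   # first rows of the simulated dataframe
```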
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_samples=10000, treatment_is_binary=True) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output Z0 Z1 X0 X1 X2 X3 X4 v \ 0 0.0 0.726137 0.209729 -0.501985 0.603780 -0.126718 0.384465 1.0 1 1.0 0.717027 0.715954 -1.786145 -0.845255 -1.533578 -0.551045 0.0 2 0.0 0.647865 -0.320209 -0.410796 -1.460011 -1.667352 -0.180602 0.0 3 1.0 0.078318 1.733261 1.138876 -0.248288 -2.129048 -0.083776 1.0 4 1.0 0.183760 1.947433 -1.670101 -0.860602 -0.879129 1.759958 1.0 y 0 11.324853 1 -8.295659 2 -6.895545 3 7.714629 4 10.668822 digraph { v ->y; U[label="Unobserved Confounders"]; U->v; U->y;X0-> v; X1-> v; X2-> v; X3-> v; X4-> v;X0-> y; X1-> y; X2-> y; X3-> y; X4-> y;Z0-> v; Z1-> v;} graph[directed 1node[ id "v" label "v"]node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "v" target "y"]edge[source "Unobserved Confounders" target "v"]edge[source "Unobserved Confounders" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "v"] node[ id "X1" label "X1"] edge[ source "X1" target "v"] node[ id "X2" label "X2"] edge[ source "X2" target "v"] node[ id "X3" label "X3"] edge[ source "X3" target "v"] node[ id "X4" label "X4"] edge[ source "X4" target "v"]edge[ source "X0" target "y"] edge[ source "X1" target "y"] edge[ source "X2" target "y"] edge[ source "X3" target "y"] edge[ source "X4" target "y"]node[ id "Z0" label "Z0"] edge[ source "Z0" target "v"] node[ id "Z1" label "Z1"] edge[ source "Z1" target "v"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate** Identification can be achieved without access to the data, accessing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step. It is important to understand that these are orthogonal steps. * Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['X2', 'X1', 'X0', 'X3', 'X4', 'Unobserved Confounders'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object.
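As a sketch of that second option (same `df` and `data` as in the cells above; shown only to illustrate where the flag can be passed):

```python
# Passing the flag at model-construction time instead of to identify_effect().
model = CausalModel(
    data=df,
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
    proceed_when_unidentifiable=True)
identified_estimand = model.identify_effect()
```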
###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['X2', 'X1', 'X0', 'X3', 'X4', 'Unobserved Confounders'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z0', 'Z1'] ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"]) model.view_model() ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['X2', 'X1', 'X0', 'X3', 'X4', 'U'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. 
Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4+w_random ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+X2+X1+X0+X3+X4 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.First, let us load all required packages. ###Code import numpy as np import pandas as pd from dowhy import CausalModel import dowhy.datasets # Avoid printing dataconversion warnings from sklearn and numpy import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) warnings.filterwarnings(action='ignore', category=FutureWarning) # Config dict to set the logging level import logging import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'WARN', }, } } logging.config.dictConfig(DEFAULT_LOGGING) logging.info("Getting started with DoWhy. Running notebook...") ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. 
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=5000, treatment_is_binary=True, stddev_treatment_noise=10, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output _____no_output_____ ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. DoWhy philosophy: Keep identification and estimation separate Identification can be achieved without access to the data, accessing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step. It is important to understand that these are orthogonal steps. Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output _____no_output_____ ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output _____no_output_____ ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter, which can be a string ("ate", "att", or "atc"), a lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`.
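To make the lambda option above concrete, here is a minimal sketch that restricts the estimate to a filtered subset of rows. It is an editorial illustration added to this write-up, not a cell from the original notebook; the condition on W0 is an arbitrary choice.
###Code
# Sketch: target_units given as a lambda that filters rows of the data frame
# (the specific W0 > 1 condition is purely illustrative).
causal_estimate_filtered = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.propensity_score_stratification",
    target_units=lambda df: df["W0"] > 1)
print("Causal Estimate (filtered units) is " + str(causal_estimate_filtered.value))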
###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output _____no_output_____ ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output _____no_output_____ ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; a) Random Common Cause b) Data Subset 2) **Nullifying transformations**: after the data change, the causal true estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test. a) Placebo Treatment Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output _____no_output_____ ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output _____no_output_____ ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output _____no_output_____ ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output _____no_output_____ ###Markdown Adding an unobserved common cause variableThis refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. 
Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome. ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output _____no_output_____ ###Markdown It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. The output is the *(min, max)* range of the estimated effects under different unobserved confounding. ###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know the maximum plausible confounding effect on treatment. Since we see that the effect does not go beyond zero, we can safely conclude that the causal effect of treatment `v0` is positive.We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap. ###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02]) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown **Automatically inferring effect strength parameters.** Finally, DoWhy supports automatic selection of the effect strength parameters. This is based on an assumption that the effect of the unobserved confounder on treatment or outcome cannot be stronger than that of any observed confounder. 
That is, we have collected data at least for the most relevant confounder. If that is the case, then we can bound the range of `effect_strength_on_treatment` and `effect_strength_on_outcome` by the effect strength of observed confounders. There is an additional optional parameter signifying whether the effect strength of unobserved confounder should be as high as the highest observed, or a fraction of it. You can set it using the optional `effect_fraction_on_treatment` and `effect_fraction_on_outcome` parameters. By default, these two parameters are 1. ###Code res_unobserved_auto = model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear") print(res_unobserved_auto) ###Output _____no_output_____ ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 -1.902783 0.0 0.687000 -1.898910 -0.286528 -0.588747 -1.242401 1 False 1 0.452612 1.0 0.663722 0.266777 -1.191347 -1.446463 -0.441300 1 True 2 -1.682866 0.0 0.409278 -0.800777 -0.393854 2.263778 -0.720168 3 True 3 0.099228 1.0 0.267023 -0.515692 0.402726 -0.093554 0.390602 2 True 4 0.580962 1.0 0.569584 -0.209274 1.377837 0.480206 -0.508706 3 True y 0 -4.218153 1 5.513300 2 17.337335 3 18.859245 4 26.775018 digraph { U[label="Unobserved Confounders"]; U->y;v0->y; U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "v0" target "y"]edge[source "Unobserved Confounders" target "v0"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. 
At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate** Identification can be achieved without access to the data, accessing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step. It is important to understand that these are orthogonal steps. * Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W4', 'W3', 'W1', 'W2', 'W0', 'Unobserved Confounders'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W4', 'W3', 'W1', 'W2', 'W0', 'Unobserved Confounders'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z1', 'Z0'] ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W4+W3+W1+W2+W0 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter, which can be a string ("ate", "att", or "atc"), a lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`.
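Similarly, the data-frame option mentioned above can be exercised by handing estimate_effect an explicit set of units. The cell below is an editorial sketch added to this write-up, not a cell from the original notebook; the random sample of rows is only an illustration.
###Code
# Sketch: target_units given as a new dataframe of units on which to compute the effect,
# as described in the text above (the sampled rows are an arbitrary illustration).
new_units = df.sample(n=500, random_state=0)
estimate_new_units = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.propensity_score_stratification",
    target_units=new_units)
print("Causal Estimate (sampled units) is " + str(estimate_new_units.value))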
###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W4+W3+W1+W2+W0 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['U', 'W3', 'W1', 'W2', 'W0', 'W4'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W2+W0+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W2+W0+W4+w_random /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W2+W0+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs)
###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_refuters.placebo_treatment_refuter:Refutation over 100 simulated datasets of permute treatment INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W3+W1+W2+W0+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs)
###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 9000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W2+W0+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs)
###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations. For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 9000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W2+W0+W4 /home/amit/python-virtual-envs/env3.6/lib/python3.6/site-packages/sklearn/utils/validation.py:73: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs)
###Markdown Getting started with DoWhy: A simple example This is a quick introduction to the DoWhy causal inference library. We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable. First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version.
###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 \ 0 -1.110828 1.0 0.046271 -0.951561 -1.303769 -0.536930 0.278931 0.138245 1 0.319915 1.0 0.434955 0.215239 -1.667459 0.003459 -1.216948 -0.012950 2 0.011011 1.0 0.476466 0.115917 -1.229601 0.712344 0.370646 -0.025362 3 0.541417 1.0 0.776554 -0.982107 -0.956335 0.398041 2.504118 -0.023444 4 1.010656 1.0 0.086765 -0.275326 -0.812568 -1.353166 1.668027 0.399852 v0 y 0 False -9.656904 1 False -7.573962 2 True 9.600593 3 True 10.209022 4 False -4.220975 digraph { U[label="Unobserved Confounders"]; U->y;v0->y; U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "v0" target "y"]edge[source "Unobserved Confounders" target "v0"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. 
This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W3', 'Unobserved Confounders', 'W0', 'W2', 'W4', 'W1'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output _____no_output_____ ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output _____no_output_____ ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output _____no_output_____ ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output _____no_output_____ ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output _____no_output_____ ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output _____no_output_____ ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output _____no_output_____ ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. 
###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output _____no_output_____ ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.First, let us load all required packages. ###Code import numpy as np import pandas as pd from dowhy import CausalModel import dowhy.datasets # Avoid printing dataconversion warnings from sklearn and numpy import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) warnings.filterwarnings(action='ignore', category=FutureWarning) # Config dict to set the logging level import logging import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'WARN', }, } } logging.config.dictConfig(DEFAULT_LOGGING) logging.info("Getting started with DoWhy. Running notebook...") ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=5000, treatment_is_binary=True, stddev_treatment_noise=10, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 1.813045 0.0 0.386437 1.737406 -0.770319 0.368265 2.265652 2 True 1 0.289600 1.0 0.330004 2.005843 -0.816153 -0.963184 -1.939368 2 True 2 0.882267 0.0 0.728241 0.964749 -1.786494 -0.239238 0.194021 0 True 3 2.002775 1.0 0.865513 2.773636 -0.751109 -1.085667 -1.385620 3 True 4 2.053132 0.0 0.327094 0.120475 -0.370037 2.716818 1.185396 1 True y 0 23.426899 1 -4.980316 2 3.646363 3 -0.890903 4 30.815931 digraph { U[label="Unobserved Confounders"]; U->y;v0->y;U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "Unobserved Confounders" target "v0"]edge[source "v0" target "y"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). 
You can also use the DOT format.To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. DoWhy philosophy: Keep identification and estimation separateIdentification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps. Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W4,W2,Z0,X0,W0,Z1,W1)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W4,W2,Z0,X0,W0,Z1,W1,U) = P(y|v0,W3,W4,W2,Z0,X0,W0,Z1,W1) ## Realized estimand b: y~v0+W3+W4+W2+Z0+X0+W0+Z1+W1 Target units: ate ## Estimate Mean value: 10.503496778665827 Causal Estimate is 10.503496778665827 ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. 
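For example, the lambda form of "target_units" restricts the estimate to a filtered subset of units; the sketch below filters on the effect modifier X0 purely for illustration. ###Code # Causal effect on a custom subset of units, selected by a lambda row filter
# (illustrative sketch; any boolean filter over the dataframe columns works)
causal_estimate_subset = model.estimate_effect(identified_estimand,
        method_name="backdoor.propensity_score_stratification",
        target_units = lambda df: df["X0"] > 1)
print(causal_estimate_subset)
print("Causal Estimate is " + str(causal_estimate_subset.value))
###Output _____no_output_____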
###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W4,W2,Z0,X0,W0,Z1,W1)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W4,W2,Z0,X0,W0,Z1,W1,U) = P(y|v0,W3,W4,W2,Z0,X0,W0,Z1,W1) ## Realized estimand b: y~v0+W3+W4+W2+Z0+X0+W0+Z1+W1 Target units: atc ## Estimate Mean value: 10.490751967183876 Causal Estimate is 10.490751967183876 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W4,W2,W0,W1)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W4,W2,W0,W1,U) = P(y|v0,W3,W4,W2,W0,W1) ## Realized estimand b: y~v0+W3+W4+W2+W0+W1 Target units: ate ## Estimate Mean value: 10.272446990065047 Causal Estimate is 10.272446990065047 ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; a) Random Common Cause b) Data Subset 2) **Nullifying transformations**: after the data change, the causal true estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test. 
a) Placebo Treatment Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output Refute: Add a random common cause Estimated effect:10.272446990065047 New effect:10.22855222425012 p value:0.28 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output Refute: Use a Placebo Treatment Estimated effect:10.272446990065047 New effect:0.014834254443949355 p value:0.44 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:10.272446990065047 New effect:10.231666326841792 p value:0.31999999999999995 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:10.272446990065047 New effect:10.245311785764631 p value:0.36 ###Markdown Adding an unobserved common cause variableThis refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome. 
###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output Refute: Add an Unobserved Common Cause Estimated effect:10.272446990065047 New effect:9.372739046931116 ###Markdown It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. The output is the *(min, max)* range of the estimated effects under different unobserved confounding. ###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know the maximum plausible confounding effect on treatment. Since we see that the effect does not go beyond zero, we can safely conclude that the causal effect of treatment `v0` is positive.We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap. ###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02]) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown **Automatically inferring effect strength parameters.** Finally, DoWhy supports automatic selection of the effect strength parameters. This is based on an assumption that the effect of the unobserved confounder on treatment or outcome cannot be stronger than that of any observed confounder. That is, we have collected data at least for the most relevant confounder. If that is the case, then we can bound the range of `effect_strength_on_treatment` and `effect_strength_on_outcome` by the effect strength of observed confounders. There is an additional optional parameter signifying whether the effect strength of unobserved confounder should be as high as the highest observed, or a fraction of it. You can set it using the optional `effect_fraction_on_treatment` and `effect_fraction_on_outcome` parameters. By default, these two parameters are 1. ###Code res_unobserved_auto = model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear") print(res_unobserved_auto) ###Output _____no_output_____ ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. 
###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output (10000, 5) (10000,) ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output _____no_output_____ ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output _____no_output_____ ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output _____no_output_____ ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. 
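As a minimal sketch of the string form, the effect on the treated (ATT) can be requested directly; the next cell computes the effect on the control group (ATC) in the same way. ###Code # Causal effect on the treated group (ATT), using the string form of target_units
causal_estimate_att_only = model.estimate_effect(identified_estimand,
        method_name="backdoor.propensity_score_stratification",
        target_units = "att")
print("ATT Estimate is " + str(causal_estimate_att_only.value))
###Output _____no_output_____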
###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output _____no_output_____ ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output _____no_output_____ ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output _____no_output_____ ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output _____no_output_____ ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output _____no_output_____ ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output _____no_output_____ ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output _____no_output_____ ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. 
For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=20000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 -0.170655 0.0 0.308945 -0.801552 -0.295687 -0.698121 -1.345645 3 True 1 -1.392633 0.0 0.730718 0.090451 -0.946255 -2.727565 2.008142 0 True 2 -0.135233 0.0 0.543428 -0.514534 -1.848059 -0.517070 0.624243 0 True 3 -1.039307 0.0 0.536432 -1.091775 -0.604876 -0.798937 0.243565 1 True 4 0.118141 0.0 0.269764 -1.329861 0.404219 -1.608674 1.542783 2 True y 0 2.298181 1 -4.230823 2 3.978352 3 -0.019290 4 1.115475 digraph { U[label="Unobserved Confounders"]; U->y;v0->y;U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "Unobserved Confounders" target "v0"]edge[source "v0" target "y"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. 
INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z0', 'Z1'] INFO:dowhy.causal_identifier:Frontdoor variables for treatment and outcome:[] ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W2+X0+W0+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. ###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W2+X0+W0+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. 
INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] INFO:dowhy.causal_identifier:Frontdoor variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3+w_random /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_refuters.placebo_treatment_refuter:Refutation over 100 simulated datasets of permute treatment INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 18000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_refuters.data_subset_refuter:Refutation over 0.9 simulated datasets of size 18000.0 each INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). 
return f(**kwargs) INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W1+W0+W2+W4+W3 /home/amit/py-envs/env3.8/lib/python3.8/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). return f(**kwargs)
###Markdown DoWhy: a simple example This is a quick introduction to the DoWhy causal inference library. We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable. First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) # Let's check the python version. print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 \ 0 0.298611 1.0 0.472086 1.968652 0.008722 1.085433 -0.999968 1.389409 1 -0.048842 1.0 0.584457 2.898602 -0.904939 -0.745294 -0.980058 0.969095 2 -0.123133 1.0 0.138142 -0.802696 -0.790802 1.029180 0.010684 -0.205064 3 -0.248771 0.0 0.098777 1.297670 -1.027000 0.792586 1.247469 -0.007736 4 -0.583826 1.0 0.924724 -1.341020 -1.295737 -0.612708 -2.955439 -0.674400 v0 y 0 True 9.440576 1 True 7.356542 2 True 10.143119 3 True 17.301749 4 False -15.930469 digraph { U[label="Unobserved Confounders"]; U->y;v0->y; U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "v0" target "y"]edge[source "Unobserved Confounders" target "v0"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data.
At present, DoWhy only supports pandas dataframes as input. Building the causal model There are two ways to specify the causal graph of the causal model: inputting the causal graph directly, or specifying the common causes and IVs. Interface 1: Input causal graph (recommended) We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"], proceed_when_unidentifiable=True ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: keep identification and estimation separate** Identification only requires knowledge of the causal graph, not of the data. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step. Keeping identification and estimation separate is an important point. * Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W3', 'Unobserved Confounders', 'W1', 'W4', 'W2', 'W0'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z1', 'Z0'] ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. ###Code # identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) # print(identified_estimand) ###Output _____no_output_____ ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py:724: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). y = column_or_1d(y, warn=True) INFO:numexpr.utils:NumExpr defaulting to 4 threads. ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`.
###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown Interface 2: Specify common causes and instruments Another way to build the causal model is to specify the common causes and instrumental variables; DoWhy then automatically treats the remaining covariates as confounders. ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"], proceed_when_unidentifiable=True) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['U', 'W3', 'W1', 'W4', 'W2', 'W0'] WARNING:dowhy.causal_identifier:If this is observed data (not from a randomized experiment), there might always be missing confounders. Causal effect cannot be identified perfectly. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown Robustness analysis We now look at several ways of refuting the estimate obtained.
Adding a random common cause variable After adding a random common cause, the causal effect should not change much. ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0+w_random ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown Replacing treatment with a random (placebo) variable After replacing the treatment with a random variable, the causal effect should be close to zero. ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W3+W1+W4+W2+W0 ###Markdown Removing a random subset of the data After randomly removing part of the data, the causal effect should not differ much. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations. For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v0+W3+W1+W4+W2+W0 /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) /Users/gong/opt/anaconda3/lib/python3.7/site-packages/sklearn/utils/validation.py:724: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). y = column_or_1d(y, warn=True) ###Markdown Getting started with DoWhy: A simple example This is a quick introduction to the DoWhy causal inference library. We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable. First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect.
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_samples=10000, treatment_is_binary=True) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output Z0 Z1 X0 X1 X2 X3 X4 v \ 0 1.0 0.497037 0.457819 -0.122799 -1.008029 -0.593407 -1.662122 1.0 1 0.0 0.124689 -0.787039 1.946060 0.860549 0.694571 -1.336487 1.0 2 0.0 0.804227 0.341221 -0.270201 0.689100 -1.286903 -2.049364 1.0 3 0.0 0.023383 0.216145 1.210546 -1.858824 -0.678202 -0.809396 1.0 4 0.0 0.284569 -1.790000 2.213267 -0.539061 -0.476480 -0.941096 1.0 y 0 6.098897 1 11.891127 2 8.033139 3 6.218750 4 7.096195 digraph { v ->y; U[label="Unobserved Confounders"]; U->v; U->y;X0-> v; X1-> v; X2-> v; X3-> v; X4-> v;X0-> y; X1-> y; X2-> y; X3-> y; X4-> y;Z0-> v; Z1-> v;} graph[directed 1node[ id "v" label "v"]node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "v" target "y"]edge[source "Unobserved Confounders" target "v"]edge[source "Unobserved Confounders" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "v"] node[ id "X1" label "X1"] edge[ source "X1" target "v"] node[ id "X2" label "X2"] edge[ source "X2" target "v"] node[ id "X3" label "X3"] edge[ source "X3" target "v"] node[ id "X4" label "X4"] edge[ source "X4" target "v"]edge[ source "X0" target "y"] edge[ source "X1" target "y"] edge[ source "X2" target "y"] edge[ source "X3" target "y"] edge[ source "X4" target "y"]node[ id "Z0" label "Z0"] edge[ source "Z0" target "v"] node[ id "Z1" label "Z1"] edge[ source "Z1" target "v"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['Unobserved Confounders', 'X4', 'X3', 'X2', 'X0', 'X1'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. 
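As a minimal sketch of that second option (not executed here; it reuses the `df` and `data` objects created above, and the new variable names are only illustrative), the flag can be passed once when the model is constructed so that later calls to `identify_effect()` do not prompt again:
###Code
# Hedged sketch (not part of the original notebook): pass the flag at construction time
model_with_flag = CausalModel(
    data=df,
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
    proceed_when_unidentifiable=True)
identified_estimand_v2 = model_with_flag.identify_effect()
###Output
_____no_output_____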
###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['Unobserved Confounders', 'X4', 'X3', 'X2', 'X0', 'X1'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z1', 'Z0'] ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X4+X3+X2+X0+X1 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"]) model.view_model() ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['X4', 'X3', 'X2', 'U', 'X0', 'X1'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X4+X3+X2+X0+X1 ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. 
Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X4+X3+X2+X0+X1+w_random ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X4+X3+X2+X0+X1 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+X4+X3+X2+X0+X1 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X4+X3+X2+X0+X1 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X4+X3+X2+X0+X1 ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified) treatment variable on a (pre-specified) outcome variable.First, let us load all required packages. ###Code import numpy as np import pandas as pd from dowhy import CausalModel import dowhy.datasets # Avoid printing dataconversion warnings from sklearn and numpy import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) warnings.filterwarnings(action='ignore', category=FutureWarning) # Config dict to set the logging level import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'WARN', }, } } logging.config.dictConfig(DEFAULT_LOGGING) ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. 
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=20000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 -1.731581 0.0 0.683217 -1.338730 -0.950288 -2.669989 -0.213464 2 False 1 -2.447741 1.0 0.558058 -0.977687 -1.335374 0.019435 1.246170 3 True 2 -0.238902 1.0 0.578728 -1.188444 1.973122 -1.187432 0.406087 1 True 3 -0.685912 1.0 0.298980 -0.719749 0.336174 1.144770 0.559923 0 True 4 0.318496 0.0 0.445429 -1.389642 0.838794 -0.856835 1.576670 1 True y 0 -1.409807 1 12.541142 2 8.746019 3 9.621235 4 8.716707 digraph { U[label="Unobserved Confounders"]; U->y;v0->y;U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "Unobserved Confounders" target "v0"]edge[source "v0" target "y"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. DoWhy philosophy: Keep identification and estimation separateIdentification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps. 
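For intuition, the backdoor estimand produced by identification below corresponds to the standard adjustment formula, written here in generic notation (treatment $T$, outcome $Y$, observed common causes $W$; these symbols are not DoWhy variable names): $E[Y \mid do(T=t)] = \sum_{w} P(W=w)\, E[Y \mid T=t, W=w]$, so the average treatment effect is $E[Y \mid do(T=1)] - E[Y \mid do(T=0)]$. The estimation step then evaluates this expression on the data.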
Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|X0,W0,W1,W2,W3,Z1,W4,Z0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,X0,W0,W1,W2,W3,Z1,W4,Z0,U) = P(y|v0,X0,W0,W1,W2,W3,Z1,W4,Z0) ### Estimand : 2 Estimand name: iv Estimand expression: Expectation(Derivative(y, [Z1, Z0])*Derivative([v0], [Z1, Z0])**(-1)) Estimand assumption 1, As-if-random: If U→→y then ¬(U →→{Z1,Z0}) Estimand assumption 2, Exclusion: If we remove {Z1,Z0}→{v0}, then ¬({Z1,Z0}→y) ### Estimand : 3 Estimand name: frontdoor No such variable found! ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|X0,W0,W1,W2,W3,Z1,W4,Z0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,X0,W0,W1,W2,W3,Z1,W4,Z0,U) = P(y|v0,X0,W0,W1,W2,W3,Z1,W4,Z0) ## Realized estimand b: y~v0+X0+W0+W1+W2+W3+Z1+W4+Z0 Target units: ate ## Estimate Mean value: 9.848224410391552 Causal Estimate is 9.848224410391552 ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. ###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|X0,W0,W1,W2,W3,Z1,W4,Z0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,X0,W0,W1,W2,W3,Z1,W4,Z0,U) = P(y|v0,X0,W0,W1,W2,W3,Z1,W4,Z0) ## Realized estimand b: y~v0+X0+W0+W1+W2+W3+Z1+W4+Z0 Target units: atc ## Estimate Mean value: 10.04252197806373 Causal Estimate is 10.04252197806373 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. 
Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W0,W1,W2,W3,W4)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W0,W1,W2,W3,W4,U) = P(y|v0,W0,W1,W2,W3,W4) ## Realized estimand b: y~v0+W0+W1+W2+W3+W4 Target units: ate ## Estimate Mean value: 9.903223504041334 Causal Estimate is 9.903223504041334 ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Refutation methods provide tests that every correct estimator should pass. So if an estimator fails the refutation test (p-value is <0.05), then it means that there is some problem with the estimator. Note that we cannot verify that the estimate is correct, but we can reject it if it violates certain expected behavior (this is analogous to scientific theories that can be falsified but not proven true). The below refutation tests are based on either 1) **Invariant transformations**: changes in the data that should not change the estimate. Any estimator whose result varies significantly between the original data and the modified data fails the test; a) Random Common Cause b) Data Subset 2) **Nullifying transformations**: after the data change, the causal true estimate is zero. Any estimator whose result varies significantly from zero on the new data fails the test. a) Placebo Treatment Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output Refute: Add a random common cause Estimated effect:9.903223504041334 New effect:9.891127024805735 p value:0.31000000000000005 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output Refute: Use a Placebo Treatment Estimated effect:9.903223504041334 New effect:-0.009679645742284613 p value:0.44999999999999996 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:9.903223504041334 New effect:9.890977247165887 p value:0.20999999999999996 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:9.903223504041334 New effect:9.890348425255238 p value:0.21999999999999997 ###Markdown Adding an unobserved common cause variableThis refutation does not return a p-value. Instead, it provides a _sensitivity_ test on how quickly the estimate changes if the identifying assumptions (used in `identify_effect`) are not valid. 
Specifically, it checks sensitivity to violation of the backdoor assumption: that all common causes are observed. To do so, it creates a new dataset with an additional common cause between treatment and outcome. To capture the effect of the common cause, the method takes as input the strength of common cause's effect on treatment and outcome. Based on these inputs on the common cause's effects, it changes the treatment and outcome values and then reruns the estimator. The hope is that the new estimate does not change drastically with a small effect of the unobserved common cause, indicating a robustness to any unobserved confounding.Another equivalent way of interpreting this procedure is to assume that there was already unobserved confounding present in the input data. The change in treatment and outcome values _removes_ the effect of whatever unobserved common cause was present in the original data. Then rerunning the estimator on this modified data provides the correct identified estimate and we hope that the difference between the new estimate and the original estimate is not too high, for some bounded value of the unobserved common cause's effect.**Importance of domain knowledge**: This test requires _domain knowledge_ to set plausible input values of the effect of unobserved confounding. We first show the result for a single value of confounder's effect on treatment and outcome. ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output Refute: Add an Unobserved Common Cause Estimated effect:9.903223504041334 New effect:8.917226996727925 ###Markdown It is often more useful to inspect the trend as the effect of unobserved confounding is increased. For that, we can provide an array of hypothesized confounders' effects. ###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=np.array([0.001, 0.005, 0.01, 0.02]), effect_strength_on_outcome=0.01) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown The above plot shows how the estimate decreases as the hypothesized confounding on treatment increases. By domain knowledge, we may know that 0.5 is the maximum plausible confounding effect, and since we see that the effect changes by only 20%, we can safely conclude that the causal effect of treatment `v0` is positive.We can also vary the confounding effect on both treatment and outcome. We obtain a heatmap. ###Code res_unobserved_range=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=[0.001, 0.005, 0.01, 0.02], effect_strength_on_outcome=[0.001, 0.005, 0.01,0.02]) print(res_unobserved_range) ###Output _____no_output_____ ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us load all required packages. 
###Code import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets # Avoid printing dataconversion warnings from sklearn import warnings from sklearn.exceptions import DataConversionWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) # Config dict to set the logging level import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'WARN', }, } } logging.config.dictConfig(DEFAULT_LOGGING) ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. ###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=20000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 v0 \ 0 -1.030109 0.0 0.779284 -0.394498 -0.385477 -0.367360 0.645206 3 True 1 -2.030467 0.0 0.535372 -1.006139 -0.926995 -0.138417 -1.484328 0 False 2 -1.586061 0.0 0.913209 -0.759918 -0.338319 1.081894 2.015009 1 True 3 -0.252026 0.0 0.240364 0.289508 -1.560852 2.178684 1.255095 2 True 4 -0.202098 1.0 0.455254 0.713760 -0.968544 0.454407 -0.410060 0 True y 0 13.574042 1 -14.557342 2 14.659045 3 13.701713 4 7.247772 digraph { U[label="Unobserved Confounders"]; U->y;v0->y;U->v0;W0-> v0; W1-> v0; W2-> v0; W3-> v0; W4-> v0;Z0-> v0; Z1-> v0;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;X0-> y;} graph[directed 1node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] node[ id "W1" label "W1"] node[ id "W2" label "W2"] node[ id "W3" label "W3"] node[ id "W4" label "W4"]node[ id "Z0" label "Z0"] node[ id "Z1" label "Z1"]node[ id "v0" label "v0"]edge[source "Unobserved Confounders" target "v0"]edge[source "v0" target "y"]edge[ source "W0" target "v0"] edge[ source "W1" target "v0"] edge[ source "W2" target "v0"] edge[ source "W3" target "v0"] edge[ source "W4" target "v0"]edge[ source "Z0" target "v0"] edge[ source "Z1" target "v0"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format.To create the causal graph for your dataset, you can use a tool like [DAGitty](http://dagitty.net/dags.html) that provides a GUI to construct the graph. You can export the graph string that it generates. The graph string is very close to the DOT format: just rename `dag` to `digraph`, remove newlines and add a semicolon after every line, to convert it to the DOT format and input to DoWhy. 
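As a minimal sketch of that conversion (the `dagitty_str` value below is a made-up export used only for illustration, not actual DAGitty output):
###Code
# Hypothetical DAGitty-style export: "dag { ... }" with one edge per line
dagitty_str = """dag {
W0 -> v0
W0 -> y
v0 -> y
}"""

# Rename `dag` to `digraph`, drop newlines, and terminate each edge with a semicolon
edges = dagitty_str.strip().splitlines()[1:-1]
dot_str = "digraph {" + "".join(edge.strip() + ";" for edge in edges) + "}"
print(dot_str)  # digraph {W0 -> v0;W0 -> y;v0 -> y;}
###Output
_____no_output_____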
###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W1,W4,W0,W2,X0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W1,W4,W0,W2,X0,U) = P(y|v0,W3,W1,W4,W0,W2,X0) ### Estimand : 2 Estimand name: iv Estimand expression: Expectation(Derivative(y, [Z1, Z0])*Derivative([v0], [Z1, Z0])**(-1)) Estimand assumption 1, As-if-random: If U→→y then ¬(U →→{Z1,Z0}) Estimand assumption 2, Exclusion: If we remove {Z1,Z0}→{v0}, then ¬({Z1,Z0}→y) ### Estimand : 3 Estimand name: frontdoor No such variable found! ###Markdown Note the parameter flag *proceed\_when\_unidentifiable*. It needs to be set to *True* to convey the assumption that we are ignoring any unobserved confounding. The default behavior is to prompt the user to double-check that the unobserved confounders can be ignored. * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W1,W4,W0,W2,X0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W1,W4,W0,W2,X0,U) = P(y|v0,W3,W1,W4,W0,W2,X0) ## Realized estimand b: y~v0+W3+W1+W4+W0+W2+X0 Target units: ate ## Estimate Mean value: 9.566139500556192 Causal Estimate is 9.566139500556192 ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. 
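The next cell uses the string form ("atc"). As a minimal sketch of the lambda option mentioned above (the filter condition `W0 > 1` is purely illustrative and not part of the original notebook):
###Code
# Hedged sketch: restrict the effect estimate to rows satisfying an arbitrary filter
causal_estimate_filtered = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.propensity_score_stratification",
    target_units=lambda df: df["W0"] > 1)  # illustrative subset of units
print(causal_estimate_filtered.value)
###Output
_____no_output_____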
###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W3,W1,W4,W0,W2,X0)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W3,W1,W4,W0,W2,X0,U) = P(y|v0,W3,W1,W4,W0,W2,X0) ## Realized estimand b: y~v0+W3+W1+W4+W0+W2+X0 Target units: atc ## Estimate Mean value: 9.573808685737262 Causal Estimate is 9.573808685737262 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output *** Causal Estimate *** ## Identified estimand Estimand type: nonparametric-ate ### Estimand : 1 Estimand name: backdoor Estimand expression: d ─────(Expectation(y|W0,W3,W1,W2,W4)) d[v₀] Estimand assumption 1, Unconfoundedness: If U→{v0} and U→y then P(y|v0,W0,W3,W1,W2,W4,U) = P(y|v0,W0,W3,W1,W2,W4) ## Realized estimand b: y~v0+W0+W3+W1+W2+W4 Target units: ate ## Estimate Mean value: 9.60353896239299 Causal Estimate is 9.60353896239299 ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. 
Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output Refute: Add a Random Common Cause Estimated effect:9.60353896239299 New effect:9.602995259163794 ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output Refute: Add an Unobserved Common Cause Estimated effect:9.60353896239299 New effect:7.591550789699005 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output Refute: Use a Placebo Treatment Estimated effect:9.60353896239299 New effect:0.005166426876598622 p value:0.47 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:9.60353896239299 New effect:9.6152898407909 p value:0.42 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output Refute: Use a subset of data Estimated effect:9.60353896239299 New effect:9.618271557173077 p value:0.33 ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. 
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_samples=10000, treatment_is_binary=True) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output Z0 Z1 X0 X1 X2 X3 X4 v \ 0 0.0 0.726137 0.209729 -0.501985 0.603780 -0.126718 0.384465 1.0 1 1.0 0.717027 0.715954 -1.786145 -0.845255 -1.533578 -0.551045 0.0 2 0.0 0.647865 -0.320209 -0.410796 -1.460011 -1.667352 -0.180602 0.0 3 1.0 0.078318 1.733261 1.138876 -0.248288 -2.129048 -0.083776 1.0 4 1.0 0.183760 1.947433 -1.670101 -0.860602 -0.879129 1.759958 1.0 y 0 11.324853 1 -8.295659 2 -6.895545 3 7.714629 4 10.668822 digraph { v ->y; U[label="Unobserved Confounders"]; U->v; U->y;X0-> v; X1-> v; X2-> v; X3-> v; X4-> v;X0-> y; X1-> y; X2-> y; X3-> y; X4-> y;Z0-> v; Z1-> v;} graph[directed 1node[ id "v" label "v"]node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "v" target "y"]edge[source "Unobserved Confounders" target "v"]edge[source "Unobserved Confounders" target "y"]node[ id "X0" label "X0"] edge[ source "X0" target "v"] node[ id "X1" label "X1"] edge[ source "X1" target "v"] node[ id "X2" label "X2"] edge[ source "X2" target "v"] node[ id "X3" label "X3"] edge[ source "X3" target "v"] node[ id "X4" label "X4"] edge[ source "X4" target "v"]edge[ source "X0" target "y"] edge[ source "X1" target "y"] edge[ source "X2" target "y"] edge[ source "X3" target "y"] edge[ source "X4" target "y"]node[ id "Z0" label "Z0"] edge[ source "Z0" target "v"] node[ id "Z1" label "Z1"] edge[ source "Z1" target "v"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['X2', 'X1', 'X0', 'X3', 'X4', 'Unobserved Confounders'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. 
###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['X2', 'X1', 'X0', 'X3', 'X4', 'Unobserved Confounders'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z0', 'Z1'] ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"]) model.view_model() ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['X2', 'X1', 'X0', 'X3', 'X4', 'U'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. 
Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4+w_random ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+X2+X1+X0+X3+X4 ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+X2+X1+X0+X3+X4 ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. 
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output X0 Z0 Z1 W0 W1 W2 W3 W4 \ 0 -1.043320 0.0 0.523430 -1.390494 0.499882 2.059895 1.275582 -0.801243 1 0.277130 1.0 0.657054 -0.703268 0.181127 1.979497 1.685872 -1.274490 2 0.355053 0.0 0.495130 -0.240308 -0.262587 -1.507041 -0.894224 -0.577460 3 -1.740237 1.0 0.664943 -0.255855 -0.857088 0.506427 -0.091888 -1.859506 4 0.132229 0.0 0.451824 -1.711469 -0.702885 0.734297 1.056272 -0.987051 v y 0 True 5.927543 1 True 8.147414 2 False -7.564643 3 False -15.067172 4 False -12.062976 digraph { v ->y; U[label="Unobserved Confounders"]; U->v; U->y;W0-> v; W1-> v; W2-> v; W3-> v; W4-> v;W0-> y; W1-> y; W2-> y; W3-> y; W4-> y;Z0-> v; Z1-> v;X0-> y;} graph[directed 1node[ id "v" label "v"]node[ id "y" label "y"]node[ id "Unobserved Confounders" label "Unobserved Confounders"]edge[source "v" target "y"]edge[source "Unobserved Confounders" target "v"]edge[source "Unobserved Confounders" target "y"]node[ id "W0" label "W0"] edge[ source "W0" target "v"] node[ id "W1" label "W1"] edge[ source "W1" target "v"] node[ id "W2" label "W2"] edge[ source "W2" target "v"] node[ id "W3" label "W3"] edge[ source "W3" target "v"] node[ id "W4" label "W4"] edge[ source "W4" target "v"]edge[ source "W0" target "y"] edge[ source "W1" target "y"] edge[ source "W2" target "y"] edge[ source "W3" target "y"] edge[ source "W4" target "y"]node[ id "Z0" label "Z0"] edge[ source "Z0" target "v"] node[ id "Z1" label "Z1"] edge[ source "Z1" target "v"]node[ id "X0" label "X0"] edge[ source "X0" target "y"]] ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W3', 'Unobserved Confounders', 'W2', 'W0', 'W4', 'W1'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. 
###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W3', 'Unobserved Confounders', 'W2', 'W0', 'W4', 'W1'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:['Z1', 'Z0'] ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+W3+W2+W0+W4+W1 /usr/local/lib/python3.5/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output INFO:dowhy.causal_identifier:Common causes of treatment and outcome:['W3', 'W2', 'W0', 'W4', 'W1', 'U'] WARNING:dowhy.causal_identifier:There are unobserved common causes. Causal effect cannot be identified. INFO:dowhy.causal_identifier:Continuing by ignoring these unobserved confounders because proceed_when_unidentifiable flag is True. INFO:dowhy.causal_identifier:Instrumental variables for treatment and outcome:[] ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+W3+W2+W0+W4+W1 /usr/local/lib/python3.5/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+W3+W2+W0+W4+W1+w_random /usr/local/lib/python3.5/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. 
FutureWarning) ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+W3+W2+W0+W4+W1 /usr/local/lib/python3.5/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~placebo+W3+W2+W0+W4+W1 /usr/local/lib/python3.5/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+W3+W2+W0+W4+W1 /usr/local/lib/python3.5/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output INFO:dowhy.causal_estimator:INFO: Using Propensity Score Stratification Estimator INFO:dowhy.causal_estimator:b: y~v+W3+W2+W0+W4+W1 /usr/local/lib/python3.5/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. FutureWarning) ###Markdown Getting started with DoWhy: A simple exampleThis is a quick introduction to the DoWhy causal inference library.We will load in a sample dataset and estimate the causal effect of a (pre-specified)treatment variable on a (pre-specified) outcome variable.First, let us add the required path for Python to find the DoWhy code and load all required packages. ###Code import os, sys sys.path.append(os.path.abspath("../../../")) ###Output _____no_output_____ ###Markdown Let's check the python version. ###Code print(sys.version) import numpy as np import pandas as pd import dowhy from dowhy import CausalModel import dowhy.datasets ###Output _____no_output_____ ###Markdown Now, let us load a dataset. For simplicity, we simulate a dataset with linear relationships between common causes and treatment, and common causes and outcome. Beta is the true causal effect. 
###Code data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=5, num_instruments = 2, num_effect_modifiers=1, num_samples=10000, treatment_is_binary=True, num_discrete_common_causes=1) df = data["df"] print(df.head()) print(data["dot_graph"]) print("\n") print(data["gml_graph"]) ###Output (10000, 5) (10000,) ###Markdown Note that we are using a pandas dataframe to load the data. At present, DoWhy only supports pandas dataframe as input. Interface 1 (recommended): Input causal graph We now input a causal graph in the GML graph format (recommended). You can also use the DOT format. ###Code # With graph model=CausalModel( data = df, treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"] ) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown The above causal graph shows the assumptions encoded in the causal model. We can now use this graph to first identify the causal effect (go from a causal estimand to a probability expression), and then estimate the causal effect. **DoWhy philosophy: Keep identification and estimation separate**Identification can be achieved without access to the data, acccesing only the graph. This results in an expression to be computed. This expression can then be evaluated using the available data in the estimation step.It is important to understand that these are orthogonal steps.* Identification ###Code identified_estimand = model.identify_effect() print(identified_estimand) ###Output _____no_output_____ ###Markdown If you want to disable the warning for ignoring unobserved confounders, you can add a parameter flag ( *proceed\_when\_unidentifiable* ). The same parameter can also be added when instantiating the CausalModel object. ###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) ###Output _____no_output_____ ###Markdown * Estimation ###Code causal_estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(causal_estimate) print("Causal Estimate is " + str(causal_estimate.value)) ###Output _____no_output_____ ###Markdown You can input additional parameters to the estimate_effect method. For instance, to estimate the effect on any subset of the units, you can specify the "target_units" parameter which can be a string ("ate", "att", or "atc"), lambda function that filters rows of the data frame, or a new dataframe on which to compute the effect. You can also specify "effect modifiers" to estimate heterogeneous effects across these variables. See `help(CausalModel.estimate_effect)`. ###Code # Causal effect on the control group (ATC) causal_estimate_att = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification", target_units = "atc") print(causal_estimate_att) print("Causal Estimate is " + str(causal_estimate_att.value)) ###Output _____no_output_____ ###Markdown Interface 2: Specify common causes and instruments ###Code # Without graph model= CausalModel( data=df, treatment=data["treatment_name"], outcome=data["outcome_name"], common_causes=data["common_causes_names"], effect_modifiers=data["effect_modifier_names"]) model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) ###Output _____no_output_____ ###Markdown We get the same causal graph. Now identification and estimation is done as before. 
###Code identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) ###Output _____no_output_____ ###Markdown * Estimation ###Code estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_stratification") print(estimate) print("Causal Estimate is " + str(estimate.value)) ###Output _____no_output_____ ###Markdown Refuting the estimateLet us now look at ways of refuting the estimate obtained. Adding a random common cause variable ###Code res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(res_random) ###Output _____no_output_____ ###Markdown Adding an unobserved common cause variable ###Code res_unobserved=model.refute_estimate(identified_estimand, estimate, method_name="add_unobserved_common_cause", confounders_effect_on_treatment="binary_flip", confounders_effect_on_outcome="linear", effect_strength_on_treatment=0.01, effect_strength_on_outcome=0.02) print(res_unobserved) ###Output _____no_output_____ ###Markdown Replacing treatment with a random (placebo) variable ###Code res_placebo=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter", placebo_type="permute") print(res_placebo) ###Output _____no_output_____ ###Markdown Removing a random subset of the data ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9) print(res_subset) ###Output _____no_output_____ ###Markdown As you can see, the propensity score stratification estimator is reasonably robust to refutations.For reproducibility, you can add a parameter "random_seed" to any refutation method, as shown below. ###Code res_subset=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter", subset_fraction=0.9, random_seed = 1) print(res_subset) ###Output _____no_output_____
2019/release/Exam/exam.ipynb
###Markdown APPM4057 - November 2019 Exam Office Info| Examiner | Name ||-------------------|-------------------|| Internal Examiner | Prof B. A. Jacobs || External Examiner | Prof N. Hale (Stellenbosch) | CDEs Honours - Exam Instructions* Read all the instructions carefully.* The exam consists of **80 Marks**, with **three and a half** hours available.* The written section is to be answered in the book provided.* You must only access Moodle **TESTS** and NOT Moodle.* The programming section is to be answered within this Jupyter notebook and resubmitted to Moodle **TESTS**.* Do not rename the notebook, simply answer the questions and resubmit the file to Moodle.* The moodle submission link will expire at exactly **12:30** and **NO** late submission will be accepted. Please make sure you submit timeously!* The **Numpy** and **Matplotlib** documentation has been downloaded and is open on your current machine.* **NB!!!** Anyone caught using Moodle (and its file storing), flash drives or external notes will be awarded zero and reported to the legal office. Written Section* Answer the following questions in the answer book provided. Question 1 - 10 MarksDiscuss each of the below listed terms and their relation to one another (i.e. the theorem which describes this relationship):* Consistency,* Stability, * Convergence,Be sure to give potential methods of analysing consistency, stability and convergence of a linear PDE. Question 2 - 10 MarksAnalyse the stability of the difference scheme given by:$$u^{m}_{n} = -\beta u^{m+1}_{n-1} + (1+2\beta) u^{m+1}_{n} - \beta u^{m+1}_{n+1},$$using the discrete Fourier Transform:$$\hat{u}^{m+1}(\xi)=\frac{1}{\sqrt{2 \pi}} \sum_{n=-\infty}^{\infty} e^{-i n \xi} u_{n}^{m+1}.$$ Question 3 - 10 MarksConsider the Crank-Nicolson (CN) scheme applied to the heat equation:\begin{equation}u_t = \nu u_{xx}.\label{eq:heat}\end{equation}(a) [3 Marks] Write out the CN scheme for the heat equation given by above.(b) [7 Marks] Investigate the consistency of the scheme derived in Question 3(a). Question 4 - 10 MarksConsider the following two-dimensional wave equation:$$\dfrac{\partial^2 u}{\partial t^2} = \nu^2 \left(\dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2}\right).\label{eq:wave}$$(a) [5 Marks] Set up the system of coupled first-order conservative partial differential equations by structuring new dependent variables $r$, $s$ and $l$, which can be used to solve the wave equation given above via the Lax-Wendroff finite difference scheme.(b) [5 Marks] Provide the system in vector notation where the system is represented as:$$\dfrac{\partial \bf{U}}{\partial t} + \nabla \bf{F}(\bf{U}) = 0, $$where $\bf{U} = [r,s,l]$. Programming Section Question 1 - 15 Marks(a) [10 Marks] Consider the wave equation given by:$$u_{tt} = c^2u_{xx},$$$$u(x, 0) = x(1 - x), \ \ \ u_t(x, 0) = 0, \ \ \ u(0, t) = u(1, t) = 0.$$Write a function that implements a centered space, centered time scheme. Specifically, your function should take as inputs, the stepsizes `dx` and `dt`, the wave speed `c` (not squared valued), the number of iterations to perform `N` and the left and right endpoints `a`, `b`.Your function should return the solution matrix, containing the wave profile at each time iteration, i.e. `N + 1` rows. 
###Code def wave_eq(dx, dt, c, N, ic, a, b): # YOUR CODE HERE raise NotImplementedError() # Run this test cell to check your code # Do not delete this cell # 1 mark # Unit test dx = 0.1 c = 0.2 tf = 8 N = 20 dt = (tf - 0)/N a = 0 b = 1 ic = lambda x: x*(1 - x) tans = np.array([0. , 0.09, 0.16, 0.21, 0.24, 0.25, 0.24, 0.21, 0.16, 0.09, 0. ]) U = wave_eq(dx, dt, c, N, ic, a, b) nt.assert_array_almost_equal(tans, U[0, :]) print('Test case passed!!!') # Run this test cell to check your code # Do not delete this cell # 1 mark # Unit test dx = 0.1 c = 0.2 tf = 8 N = 20 dt = (tf - 0)/N a = 0 b = 1 ic = lambda x: x*(1 - x) tans = np.array([0. , 0.0836, 0.1536, 0.2036, 0.2336, 0.2436, 0.2336, 0.2036, 0.1536, 0.0836, 0. ]) U = wave_eq(dx, dt, c, N, ic, a, b) nt.assert_array_almost_equal(tans, U[1, :]) print('Test case passed!!!') # Hidden test # No output will be produced # 8 marks ###Output _____no_output_____ ###Markdown (b) [5 Marks] Continuing with Question 1(a), given the input below, produce the 3D surface plot, illustrating the evolution of the wave profile over time. ###Code dx = 0.001 c = 0.2 tf = 8 N = 2000 dt = (tf - 0)/N a = 0 b = 1 ic = lambda x: x*(1 - x) xvals = np.linspace(a, b, int((b - a)/dx) + 1) U = wave_eq(dx, dt, c, N, ic, a, b) # 5 Marks # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown Question 2 - 10 Marks(a) [8 Marks] Write a function implements a finite difference scheme to solve the partial differential equation below. The function should implement an explicit scheme which is forward difference in time and central difference in space, specifically first order in time and second order in space. It should take as inputs a time step `dt`, a spatial step `dx`, a number of time-steps `N` to perform, the coefficients `D` and `nu`, an initial function `ic` passed as a handle and boundary values `alpha` and `beta`. The function should output the solution space matrix `u`, as well as the `xvals` (i.e. vector of $x$ steps) and `tvals` (i.e the vector of time steps). The PDE is given below:$$v_t + \nu v_x = D v_{xx},$$$$\text{BC}: v(0, t) = \alpha = v(1, t) = \beta = 0;\quad\text{IC}: v(x, 0) = \sin(8\pi x); \quad t > 0, \ x \in [0, 1]$$ ###Code def heatEqn(dt, dx, N, ic, D, alpha, beta, nu): # YOUR CODE HERE raise NotImplementedError() # Run this test cell to check your code # Do not delete this cell # 1 mark # Unit test dx = 0.01 dt = 0.01 N = 2 D = 0.1 nu = 1 alpha = 0 beta = 0 ic = lambda x: np.sin(8*np.pi*x) u, xx, tt = heatEqn(dt, dx, N, ic, D, alpha, beta, nu) tans = np.array( [ 0. , 0.2486899, 0.4817537, 0.6845471, 0.8443279, 0.9510565, 0.9980267, 0.9822873, 0.9048271, 0.7705132, 0.5877853, 0.3681246, 0.1253332, -0.1253332, -0.3681246, -0.5877853, -0.7705132, -0.9048271, -0.9822873, -0.9980267, -0.9510565, -0.8443279, -0.6845471, -0.4817537, -0.2486899, -0. , 0.2486899, 0.4817537, 0.6845471, 0.8443279, 0.9510565, 0.9980267, 0.9822873, 0.9048271, 0.7705132, 0.5877853, 0.3681246, 0.1253332, -0.1253332, -0.3681246, -0.5877853, -0.7705132, -0.9048271, -0.9822873, -0.9980267, -0.9510565, -0.8443279, -0.6845471, -0.4817537, -0.2486899, -0. , 0.2486899, 0.4817537, 0.6845471, 0.8443279, 0.9510565, 0.9980267, 0.9822873, 0.9048271, 0.7705132, 0.5877853, 0.3681246, 0.1253332, -0.1253332, -0.3681246, -0.5877853, -0.7705132, -0.9048271, -0.9822873, -0.9980267, -0.9510565, -0.8443279, -0.6845471, -0.4817537, -0.2486899, -0. 
, 0.2486899, 0.4817537, 0.6845471, 0.8443279, 0.9510565, 0.9980267, 0.9822873, 0.9048271, 0.7705132, 0.5877853, 0.3681246, 0.1253332, -0.1253332, -0.3681246, -0.5877853, -0.7705132, -0.9048271, -0.9822873, -0.9980267, -0.9510565, -0.8443279, -0.6845471, -0.4817537, -0.2486899, 0. ]) nt.assert_array_almost_equal(tans, u[0]) print('Test case passed!!!') # Run this test cell to check your code # Do not delete this cell # 1 mark # Unit test dx = 0.01 dt = 0.01 N = 2 D = 0.1 nu = 1 alpha = 0 beta = 0 ic = lambda x: np.sin(8*np.pi*x) u, xx, tt = heatEqn(dt, dx, N, ic, D, alpha, beta, nu) tans = np.array( [ 0. , 2.4511655, -0.1252406, -0.0825335, -0.0346406, 0.015429 , 0.064529 , 0.1095745, 0.1477351, 0.1766128, 0.1943934, 0.1999595, 0.1929614, 0.1738388, 0.1437933, 0.1047128, 0.0590527, 0.0096822, -0.0402967, -0.0877437, -0.1296773, -0.1634629, -0.1869775, -0.1987436, -0.1980219, -0.1848578, -0.1600783, -0.1252406, -0.0825335, -0.0346406, 0.015429 , 0.064529 , 0.1095745, 0.1477351, 0.1766128, 0.1943934, 0.1999595, 0.1929614, 0.1738388, 0.1437933, 0.1047128, 0.0590527, 0.0096822, -0.0402967, -0.0877437, -0.1296773, -0.1634629, -0.1869775, -0.1987436, -0.1980219, -0.1848578, -0.1600783, -0.1252406, -0.0825335, -0.0346406, 0.015429 , 0.064529 , 0.1095745, 0.1477351, 0.1766128, 0.1943934, 0.1999595, 0.1929614, 0.1738388, 0.1437933, 0.1047128, 0.0590527, 0.0096822, -0.0402967, -0.0877437, -0.1296773, -0.1634629, -0.1869775, -0.1987436, -0.1980219, -0.1848578, -0.1600783, -0.1252406, -0.0825335, -0.0346406, 0.015429 , 0.064529 , 0.1095745, 0.1477351, 0.1766128, 0.1943934, 0.1999595, 0.1929614, 0.1738388, 0.1437933, 0.1047128, 0.0590527, 0.0096822, -0.0402967, -0.0877437, -0.1296773, -0.1634629, -0.1869775, -0.1987436, 2.164532 , 0. ]) nt.assert_array_almost_equal(tans, u[-1]) print('Test case passed!!!') # Hidden test # No output will be produced # 6 marks ###Output _____no_output_____ ###Markdown (b) [2 Marks] Continuing on from Question 3(a). Plot the profile of each time step on the same set of axes, given the following inputs (be sure to use a legend indicating which iteration and curve is which): ###Code dx = 0.01 dt = 0.01 N = 2 D = 0.1 nu = 1 alpha = 0 beta = 0 ic = lambda x: np.sin(8*np.pi*x) # 2 Marks # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown Question 3 - 15 Marks(a) [10 Marks] Consider the PDE given by:$$v_t = D\left( v_{xx} + v_{yy}\right), $$$$v(-1, y, t) = v(1, y, t) = 0,\ v(x, y, 0) = 0, $$\begin{equation}v(x, -1, 0) = v(x, 1, 0) = 100, \ \ \ x \in (-1, 1)\ \ \text{i.e. endpoints excluded}\end{equation}Periodic boundary conditions should be imposed on $y = -1, y = 1$, i.e.$v(x,-1,t)=v(x,1,t)$ for all $t$ and $x$.Write a function which implements an ADI scheme to solve the PDE given above. Specifically, the function should take as inputs, the step-sizes, `dx` and `dy`, the time-step `dt`, the constant `D`, and the number of iterations to perform `N`.The function should return the 2D heatmap for each time-step, that is, a 3D array of `N + 1` time-steps. 
###Code def heat2D_ADI(dx, dy, dt, D, N): # YOUR CODE HERE raise NotImplementedError() # Run this test cell to check your code # Do not delete this cell # 1 mark # Unit test dx = 0.2 dy = 0.2 dt = 0.1 D = 0.1 N = 5 tans = np.array([[ 0., 100., 100., 100., 100., 100., 100., 100., 100., 100., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 100., 100., 100., 100., 100., 100., 100., 100., 100., 0.]]) U = heat2D_ADI(dx, dy, dt, D, N) nt.assert_array_almost_equal(tans, U[:, :, 0]) print('Test case passed!!!') # Run this test cell to check your code # Do not delete this cell # 1 mark # Unit test dx = 0.2 dy = 0.2 dt = 0.1 D = 0.1 N = 5 tans = np.array([[ 0. , 65.3025091, 78.3317661, 79.6479746, 79.780803 , 79.7928783, 79.780803 , 79.6479746, 78.3317661, 65.3025091, 0. ], [ 0. , 14.8641016, 17.82981 , 18.1294042, 18.1596385, 18.1623871, 18.1596385, 18.1294042, 17.82981 , 14.8641016, 0. ], [ 0. , 1.5015808, 1.8011785, 1.8314437, 1.834498 , 1.8347756, 1.834498 , 1.8314437, 1.8011785, 1.5015808, 0. ], [ 0. , 0.1517061, 0.1819747, 0.1850325, 0.185341 , 0.1853691, 0.185341 , 0.1850325, 0.1819747, 0.1517061, 0. ], [ 0. , 0.0154802, 0.0185689, 0.0188809, 0.0189124, 0.0189152, 0.0189124, 0.0188809, 0.0185689, 0.0154802, 0. ], [ 0. , 0.003096 , 0.0037138, 0.0037762, 0.0037825, 0.003783 , 0.0037825, 0.0037762, 0.0037138, 0.003096 , 0. ], [ 0. , 0.0154802, 0.0185689, 0.0188809, 0.0189124, 0.0189152, 0.0189124, 0.0188809, 0.0185689, 0.0154802, 0. ], [ 0. , 0.1517061, 0.1819747, 0.1850325, 0.185341 , 0.1853691, 0.185341 , 0.1850325, 0.1819747, 0.1517061, 0. ], [ 0. , 1.5015808, 1.8011785, 1.8314437, 1.834498 , 1.8347756, 1.834498 , 1.8314437, 1.8011785, 1.5015808, 0. ], [ 0. , 14.8641016, 17.82981 , 18.1294042, 18.1596385, 18.1623871, 18.1596385, 18.1294042, 17.82981 , 14.8641016, 0. ], [ 0. , 65.3025091, 78.3317661, 79.6479746, 79.780803 , 79.7928783, 79.780803 , 79.6479746, 78.3317661, 65.3025091, 0. ]]) U = heat2D_ADI(dx, dy, dt, D, N) nt.assert_array_almost_equal(tans, U[:, :, 1]) print('Test case passed!!!') # Hidden test # No output will be produced # 8 marks ###Output _____no_output_____ ###Markdown (b) [2 Marks] Plot a heatmap of the final time-step using the ADI function in Question 2(a), given the input below: ###Code dx = 0.1 dy = 0.1 dt = 0.1 D = 0.2 N = 5 U = heat2D_ADI(dx, dy, dt, D, N) # 2 Marks # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown (c) [3 Marks] Modify your function from Question 3(a) to plot the two tridiagonal ADI matrices $A$ and $B$. ###Code #3 Marks # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____
statsmodels_poc/statsmodels_logistic_reg.ipynb
###Markdown POC work on statsmodels logistic regressionPOC on statsmodels Logistic Regression including1. Training the model2. Pulling predictions3. Basic evaluation metrics4. Wrapping the statsmodels Logistic Regression into an sklearn Pipeline ###Code import statsmodels.api as sm import pandas as pd import numpy as np import seaborn as sns import scipy from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.base import BaseEstimator, RegressorMixin from sklearn.pipeline import Pipeline from sklearn.metrics import ( roc_auc_score, recall_score, accuracy_score, precision_score, confusion_matrix, roc_curve ) # To help when Jupyter Notebook autocomplete is really slow %config Completer.use_jedi = False obs = 100000 population_df = ( # Make columns of noise pd.DataFrame(np.random.rand(obs, 3), columns=[f"f{i}" for i in range(3)]) .assign(target_class=np.random.rand(obs, 1) > 0.5) # Make columns that can help predict the target .assign(f3=lambda p_df: p_df.target_class.apply(lambda tc: tc + np.random.standard_normal())) ) population_df.head() X_train, X_test, y_train, y_test = train_test_split( population_df.drop(columns=['target_class']), population_df.target_class, test_size=0.2, random_state=42) # By default there is no intercept but can be added into the X_train as a column of 1's logit_mod = sm.Logit(y_train, X_train) logit_mod_fit = logit_mod.fit() logit_mod_fit.summary() # By default these are the probabilities of target_class == True # For now, just use the standard cutoff of 0.5 predictions = logit_mod_fit.predict(X_test) > 0.5 print(f"recall: {recall_score(y_test, predictions)}") print(f"accuracy: {accuracy_score(y_test, predictions)}") print(f"precision: {precision_score(y_test, predictions)}") print(f"ROC: {roc_auc_score(y_test, predictions)}") display(pd.DataFrame(confusion_matrix(y_test, predictions), index=[False, True], columns=[False, True])) fpr, tpr, _ = roc_curve(y_test, predictions) ax = sns.lineplot(x=fpr, y=tpr) ax.set_title('ROC Curve for Logistic Regression model') ###Output _____no_output_____ ###Markdown Wrap the Logit predictor in an sklearn Pipeline ###Code class SMWrapper(BaseEstimator, RegressorMixin): """ A universal sklearn-style wrapper for statsmodels regressors """ def __init__(self, model_class, fit_intercept=True): self.model_class = model_class self.fit_intercept = fit_intercept def fit(self, X, y): if isinstance(X, scipy.sparse.csr.csr_matrix): X = X.todense() # statsmodels has trouble with csr_matrix if self.fit_intercept: X = sm.add_constant(X) self.model_ = self.model_class(y, X) self.results_ = self.model_.fit() def predict(self, X): if self.fit_intercept: X = sm.add_constant(X) return self.results_.predict(X) pipeline_logit_model = Pipeline( steps=[ ('classifier', SMWrapper(sm.Logit, fit_intercept=False)) ]) pipeline_logit_model.fit(X_train, y_train) pl_predictions = pipeline_logit_model.predict(X_test) > 0.5 assert all(pl_predictions == predictions), 'Base predictions without the pipeline should match these predictions' ###Output _____no_output_____
.ipynb_checkpoints/keras_minst_linear-checkpoint.ipynb
###Markdown Load Dataset---- Dataset operations - data normalization - data reshaping - label gethering ###Code # pre-defined mnist dataest from keras.datasets import mnist batch_size = 128 n_classes = 10 # 10 digits 0 to 9 # the data, shuffled and split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() # input image dimentions n_sample, img_rows, img_cols = x_train.shape # Reshape data x_train = x_train.reshape(x_train.shape[0], img_rows * img_cols) x_test = x_test.reshape(x_test.shape[0], -1) input_shape = (img_rows * img_cols,) # float limiting for optimized memmory (for GPU usage) # basic gaming GPUs only works with 32 bit float and 32 bit int x_train = x_train.astype("float32") x_test = x_test.astype("float32") # normalizing the input between [1 0] x_train /= 255 x_test /= 255 print('x max:{} x min {}'.format(x_train.max(), x_train.min())) print('train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') #convert class vectors to binary class matrices Y_train = utils.to_categorical(y_train, n_classes) Y_test = utils.to_categorical(y_test, n_classes) print("label: {} ,One hot encoding: {}".format(y_train[0], Y_train[0, :])) ###Output x max:1.0 x min 0.0 train shape: (60000, 784) 60000 train samples 10000 test samples label: 5 ,One hot encoding: [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.] ###Markdown Mnist data example ###Code for i in range(9): plt.subplot(3, 3, i+1) plt.imshow(x_train[i].reshape(img_rows, img_cols), cmap='gray') plt.axis("off") ###Output _____no_output_____ ###Markdown Model Definition ###Code # needed for model definition from keras.models import Sequential from keras.layers import Dense, Dropout from keras.layers import Activation model = Sequential([ Dense(128, input_shape=input_shape), Activation('relu'), Dense(128, activation='relu'), Dropout(0.5), Dense(64, activation='relu'), Dropout(0.5), Dense(64, activation='relu'), Dropout(0.5), Dense(32, activation='relu'), Dropout(0.5), Dense(n_classes, activation='softmax') ]) ###Output _____no_output_____ ###Markdown Train ###Code LR = 1e-3 opt = keras.optimizers.Adam(lr=LR) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'] ) model.summary() utils.plot_model(model, to_file='images/linear_model.png') ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 128) 100480 _________________________________________________________________ activation_1 (Activation) (None, 128) 0 _________________________________________________________________ dense_2 (Dense) (None, 128) 16512 _________________________________________________________________ dropout_1 (Dropout) (None, 128) 0 _________________________________________________________________ dense_3 (Dense) (None, 64) 8256 _________________________________________________________________ dropout_2 (Dropout) (None, 64) 0 _________________________________________________________________ dense_4 (Dense) (None, 64) 4160 _________________________________________________________________ dropout_3 (Dropout) (None, 64) 0 _________________________________________________________________ dense_5 (Dense) (None, 32) 2080 _________________________________________________________________ dropout_4 (Dropout) (None, 32) 0 _________________________________________________________________ dense_6 (Dense) (None, 10) 330 
================================================================= Total params: 131,818 Trainable params: 131,818 Non-trainable params: 0 _________________________________________________________________ ###Markdown Printed model graph-----![model graph](images/linear_model.png) ###Code n_epoch = 3 # we can increase epoch history = model.fit(x_train, Y_train, batch_size=batch_size, epochs=n_epoch, verbose=1, validation_split=0.2, shuffle=True) score = model.evaluate(x_test, Y_test, verbose=1) print ('Test score : {:.6f}'.format(score[0])) print ('Test accuracy: {:5.2f}%'.format(score[1] * 100)) ###Output Train on 48000 samples, validate on 12000 samples Epoch 1/3 48000/48000 [==============================] - 22s 457us/step - loss: 1.5917 - acc: 0.4240 - val_loss: 0.6878 - val_acc: 0.8082 Epoch 2/3 48000/48000 [==============================] - 20s 425us/step - loss: 0.7828 - acc: 0.7532 - val_loss: 0.3499 - val_acc: 0.9218 Epoch 3/3 48000/48000 [==============================] - 21s 431us/step - loss: 0.5354 - acc: 0.8530 - val_loss: 0.2551 - val_acc: 0.9437 10000/10000 [==============================] - 6s 597us/step Test score : 0.271645 Test accuracy: 94.25% ###Markdown Training History Visiualization ###Code # Plot training & validation accuracy values plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # Plot training & validation loss values plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='best') plt.show() ###Output _____no_output_____ ###Markdown saving and loading weights ###Code model.save_weights('weights/mnist_linear.h5') print('model saved.') model.load_weights('weights/mnist_linear.h5') score = model.evaluate(x_test, Y_test, verbose=1) print ('Test score : {:.6f}'.format(score[0])) print ('Test accuracy: {:5.2f}%'.format(score[1] * 100)) ## Visualize sample result radn_n = np.random.randint(x_test.shape[0] - 9) res = model.predict_classes(x_test[radn_n:radn_n+9]) plt.figure(figsize=(10, 10)) for i in range(9): plt.subplot(3, 3, i+1) plt.imshow(x_test[i+radn_n].reshape(img_rows, img_cols), 'gray') plt.gca().get_xaxis().set_ticks([]) plt.gca().get_yaxis().set_ticks([]) plt.xlabel("prediction = %d" % res[i], fontsize= 18) model = Sequential([ #Dense(128, input_shape=input_shape, activation='relu', kernel_regularizer=keras.regularizers.l2(0.2)), #Dropout(0.5), Dense(n_classes, input_shape=input_shape, activation='softmax', kernel_regularizer=keras.regularizers.l2(0.2)) ]) ###Output _____no_output_____ ###Markdown Train ###Code LR = 1e-3 opt = keras.optimizers.Adam(lr=LR) model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'] ) model.summary() utils.plot_model(model, to_file='images/linear_model_2.png') ###Output _____no_output_____ ###Markdown Printed model graph-----![model graph](images/linear_model_2.png) ###Code n_epoch = 4 # we can increase epoch history = model.fit(x_train, Y_train, batch_size=batch_size, epochs=n_epoch, verbose=1, validation_split=0.2, shuffle=True) score = model.evaluate(x_test, Y_test, verbose=1) print ('Test score : {:.6f}'.format(score[0])) print ('Test accuracy: {:5.2f}%'.format(score[1] * 100)) model.save_weights('weights/mnist_linear_2.h5') print('model saved.') ###Output model saved.
notebooks/DummyDataTest.ipynb
###Markdown create_dummy_data with 10 million rows ###Code from shared.create_dummy_data import create_double_helix from pyspark.sql import functions as F helix = create_double_helix(100, 5., 1., ) helix_df = spark.createDataFrame(helix) helix_df.withColumn('dd',F.isnan('unknown_label')).show() helix.to_csv('/home/svanhmic/workspace/data/DABAI/sparkdata/csv/double_helix3.csv', sep=',', index=False, columns=['id','unknown_label','x','y','z']) # (helix_df # .select('id','unknown_label','x','y','z') # .write # .csv('/home/svanhmic/workspace/data/DABAI/sparkdata/csv/double_helix2.csv', # mode='overwrite',header=True, # nullValue=' ') # ) df = spark.read.csv('/home/svanhmic/workspace/data/DABAI/sparkdata/csv/double_helix2.csv', sep=',',inferSchema=True, header=True, nanValue=None, nullValue=None) df.show(200) mnist_df = spark.read.csv('/home/svanhmic/workspace/data/DABAI/mnist/train.csv',header=True) mnist_df.show(5) mnist_pdf = pd.read_csv('/home/svanhmic/workspace/data/DABAI/mnist/train.csv', header=0) spark.createDataFrame(mnist_pdf).show(5) ###Output _____no_output_____
notebooks/pytorch-ted-kaggle.ipynb
###Markdown TED Talks keyword labeling with pre-trained word embeddingsIn this notebook, we'll use pre-trained [GloVe word embeddings](http://nlp.stanford.edu/projects/glove/) for keyword labeling using PyTorch. This notebook is largely based on the blog post [Using pre-trained word embeddings in a Keras model](https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html) by François Chollet.**Note that using a GPU with this notebook is highly recommended.**First, the needed imports. ###Code %matplotlib inline import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.autograd import Variable from torch.utils.data import TensorDataset, DataLoader from distutils.version import LooseVersion as LV from keras.preprocessing import sequence, text from sklearn import metrics import os import sys import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set() if torch.cuda.is_available(): device = torch.device('cuda') else: device = torch.device('cpu') print('Using PyTorch version:', torch.__version__, ' Device:', device) assert(LV(torch.__version__) >= LV("1.0.0")) ###Output _____no_output_____ ###Markdown TensorBoard is a tool for visualizing progress during training. Although TensorBoard was created for TensorFlow, it can also be used with PyTorch. It is easiest to use it with the tensorboardX module. ###Code try: import tensorboardX import os, datetime logdir = os.path.join(os.getcwd(), "logs", "ted-"+datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')) print('TensorBoard log directory:', logdir) os.makedirs(logdir) log = tensorboardX.SummaryWriter(logdir) except ImportError as e: log = None ###Output _____no_output_____ ###Markdown GloVe word embeddingsLet's begin by loading a datafile containing pre-trained word embeddings from [Pouta Object Storage](https://research.csc.fi/pouta-object-storage). The datafile contains 100-dimensional embeddings for 400,000 English words. ###Code !wget -nc https://object.pouta.csc.fi/swift/v1/AUTH_dac/mldata/glove6b100dtxt.zip !unzip -u glove6b100dtxt.zip GLOVE_DIR = "." print('Indexing word vectors.') embeddings_index = {} with open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt')) as f: for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefs embedding_dim = len(coefs) print('Found %d word vectors of dimensionality %d.' % (len(embeddings_index), embedding_dim)) print('Examples of embeddings:') for w in ['some', 'random', 'words']: print(w, embeddings_index[w]) ###Output _____no_output_____ ###Markdown TED Talks data setNext we'll load the TED Talks data set (Kaggle [TED Talks](https://www.kaggle.com/rounakbanik/ted-talks), 2017 edition). The data is stored in two CSV files, so we load both of them and merge them into a single DataFrame. The merged dataset contains transcripts and metadata of 2467 TED talks. Each talk is also annotated with a set of tags. ###Code !wget -nc https://object.pouta.csc.fi/swift/v1/AUTH_dac/mldata/ted-talks.zip !unzip -u ted-talks.zip TEXT_DATA_DIR = "." df1 = pd.read_csv(TEXT_DATA_DIR+'/ted_main.csv') df2 = pd.read_csv(TEXT_DATA_DIR+'/transcripts.csv') df = pd.merge(left=df1, right=df2, how='inner', left_on='url', right_on='url') print(len(df), 'talks') df.head() ###Output _____no_output_____ ###Markdown Textual dataThere are two potential columns to be used as the input text source: `transcript` and `description`. 
The former is the full transcript of the talk, whereas the latter is a shorter abstract of the contents of the talk. Let's inspect the distributions of the lengths of these columns: ###Code len_trans, len_descr = np.empty(len(df)), np.empty(len(df)) for i, row in df.iterrows(): len_trans[i]=len(row['transcript']) len_descr[i]=len(row['description']) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title('Length of descriptions, mean: %.2f' % np.mean(len_descr)) plt.xlabel('words') plt.hist(len_descr, 'auto') plt.subplot(122) plt.title('Length of transcripts, mean: %.2f' % np.mean(len_trans)) plt.xlabel('words') plt.hist(len_trans, 'auto'); ###Output _____no_output_____ ###Markdown Now we decide to use either the `transcript` or the `description` column: ###Code texttype = "transcript" #texttype = "description" ###Output _____no_output_____ ###Markdown KeywordsLet's start by converting the string-type lists of tags to Python lists. Then, we take a look at a histogram of the number of tags attached to talks: ###Code import ast df['taglist']=df['tags'].apply(lambda x: ast.literal_eval(x)) df.head() l = np.empty(len(df)) for i, v in df['taglist'].iteritems(): l[i]=len(v) plt.figure() plt.title('Number of tags, mean: %.2f' % np.mean(l)) plt.xlabel('labels') plt.hist(l,np.arange(40)+1); ###Output _____no_output_____ ###Markdown We use the `NLABELS` most frequent tags as keyword labels we wish to predict: ###Code NLABELS=100 ntags = dict() for tl in df['taglist']: for t in tl: if t in ntags: ntags[t] += 1 else: ntags[t] = 1 ntagslist_sorted = sorted(ntags, key=ntags.get, reverse=True) print('Total of', len(ntagslist_sorted), 'tags found. Showing', NLABELS, 'most common tags:') for i, t in enumerate(ntagslist_sorted[:NLABELS]): print(i, t, ntags[t]) def tags_to_indices(x): ilist = [] for t in x: ilist.append(ntagslist_sorted.index(t)) return ilist df['tagidxlist'] = df['taglist'].apply(tags_to_indices) def indices_to_labels(x): labels = np.zeros(NLABELS) for i in x: if i < NLABELS: labels[i] = 1 return labels df['labels'] = df['tagidxlist'].apply(indices_to_labels) df.head() ###Output _____no_output_____ ###Markdown Produce input and label tensorsWe vectorize the text samples and labels into 2D integer tensors. `MAX_NUM_WORDS` is the number of different words to use as tokens, selected based on word frequency. `MAX_SEQUENCE_LENGTH` is the fixed sequence length obtained by truncating or padding the original sequences. ###Code MAX_NUM_WORDS = 10000 MAX_SEQUENCE_LENGTH = 1000 tokenizer = text.Tokenizer(num_words=MAX_NUM_WORDS) tokenizer.fit_on_texts([x for x in df[texttype]]) sequences = tokenizer.texts_to_sequences([x for x in df[texttype]]) word_index = tokenizer.word_index print('Found %s unique tokens.' % len(word_index)) data = sequence.pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH) labels = np.asarray([x for x in df['labels']]) print('Shape of data tensor:', data.shape) print('Shape of labels tensor:', labels.shape) ###Output _____no_output_____ ###Markdown Next, we split the data into a training set and a validation set. We use a fraction of the data specified by `VALIDATION_SPLIT` for validation. Note that we do not use a separate test set in this notebook, due to the small size of the dataset.
###Code VALIDATION_SPLIT = 0.2 indices = np.arange(data.shape[0]) np.random.shuffle(indices) data = data[indices] labels = labels[indices] num_validation_samples = int(VALIDATION_SPLIT * data.shape[0]) x_train = data[:-num_validation_samples] y_train = labels[:-num_validation_samples] x_val = data[-num_validation_samples:] y_val = labels[-num_validation_samples:] print('Shape of training data tensor:', x_train.shape) print('Shape of training label tensor:', y_train.shape) print('Shape of validation data tensor:', x_val.shape) print('Shape of validation label tensor:', y_val.shape) BATCH_SIZE = 16 print('Train: ', end="") train_dataset = TensorDataset(torch.LongTensor(x_train), torch.FloatTensor(y_train)) train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) print(len(train_dataset), 'talks') print('Validation: ', end="") validation_dataset = TensorDataset(torch.LongTensor(x_val), torch.FloatTensor(y_val)) validation_loader = DataLoader(validation_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4) print(len(validation_dataset), 'talks') ###Output _____no_output_____ ###Markdown We prepare the embedding matrix by retrieving the corresponding word embedding for each token in our vocabulary: ###Code print('Preparing embedding matrix.') num_words = min(MAX_NUM_WORDS, len(word_index) + 1) embedding_matrix = np.zeros((num_words, embedding_dim)) for word, i in word_index.items(): if i >= MAX_NUM_WORDS: continue embedding_vector = embeddings_index.get(word) if embedding_vector is not None: # words not found in embedding index will be all-zeros. embedding_matrix[i] = embedding_vector embedding_matrix = torch.FloatTensor(embedding_matrix) print('Shape of embedding matrix:', embedding_matrix.shape) ###Output _____no_output_____ ###Markdown 1-D CNN Initialization ###Code class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.embed = nn.Embedding.from_pretrained(embedding_matrix, freeze=True) self.conv1 = nn.Conv1d(100, 128, 5) self.pool1 = nn.MaxPool1d(5) self.conv2 = nn.Conv1d(128, 128, 5) self.pool2 = nn.MaxPool1d(5) self.conv3 = nn.Conv1d(128, 128, 5) self.pool3 = nn.MaxPool1d(35) self.fc1 = nn.Linear(128, 64) self.fc2 = nn.Linear(64, NLABELS) def forward(self, x): x = self.embed(x) x = x.transpose(1,2) x = F.relu(self.conv1(x)) x = self.pool1(x) x = F.relu(self.conv2(x)) x = self.pool2(x) x = F.relu(self.conv3(x)) x = self.pool3(x) x = x.view(-1, 128) x = F.relu(self.fc1(x)) return torch.sigmoid(self.fc2(x)) #return F.log_softmax(self.fc2(x), dim=1) model = Net().to(device) optimizer = optim.RMSprop(model.parameters(), lr=0.005) criterion = nn.BCELoss() print(model) ###Output _____no_output_____ ###Markdown Learning ###Code def train(epoch, log_interval=200): # Set model to training mode model.train() # Loop over each batch from the training set for batch_idx, (data, target) in enumerate(train_loader): # Copy data to GPU if needed data = data.to(device) target = target.to(device) # Zero gradient buffers optimizer.zero_grad() # Pass data through the network output = model(data) # Calculate loss loss = criterion(output, target) # Backpropagate loss.backward() # Update weights optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. 
* batch_idx / len(train_loader), loss.data.item())) def evaluate(loader, loss_vector=None): model.eval() loss, correct = 0, 0 pred_vector = torch.FloatTensor() pred_vector = pred_vector.to(device) for data, target in loader: data = data.to(device) target = target.to(device) output = model(data) loss += criterion(output, target).data.item() pred = output.data pred_vector = torch.cat((pred_vector, pred)) loss /= len(validation_loader) if loss_vector is not None: loss_vector.append(loss) print('Average loss: {:.4f}\n'.format(loss)) return np.array(pred_vector.cpu()) %%time epochs = 20 lossv = [] for epoch in range(1, epochs + 1): train(epoch) with torch.no_grad(): print('\nValidation set:') evaluate(validation_loader, lossv) plt.figure(figsize=(5,3)) plt.plot(np.arange(1,epochs+1), lossv) plt.title('validation loss') ###Output _____no_output_____ ###Markdown InferenceTo further analyze the results, we can produce the actual predictions for the validation data. ###Code %%time with torch.no_grad(): predictions = evaluate(validation_loader) ###Output _____no_output_____ ###Markdown The selected threshold controls the number of label predictions we'll make: ###Code threshold = 0.5 avg_n_gt, avg_n_pred = 0, 0 for t in range(len(y_val)): avg_n_gt += len(np.where(y_val[t]>0.5)[0]) avg_n_pred += len(np.where(predictions[t]>threshold)[0]) avg_n_gt /= len(y_val) avg_n_pred /= len(y_val) print('Average number of ground-truth labels per talk: %.2f' % avg_n_gt) print('Average number of predicted labels per talk: %.2f' % avg_n_pred) ###Output _____no_output_____ ###Markdown Let's look at the correct and predicted labels for some talks in the validation set. ###Code nb_talks_to_show = 20 for t in range(nb_talks_to_show): print(t,':') print(' correct: ', end='') for idx in np.where(y_val[t]>0.5)[0].tolist(): sys.stdout.write('['+ntagslist_sorted[idx]+'] ') print() print(' predicted: ', end='') for idx in np.where(predictions[t]>threshold)[0].tolist(): sys.stdout.write('['+ntagslist_sorted[idx]+'] ') print() ###Output _____no_output_____ ###Markdown Precision, recall, the F1 measure, and NDCG (normalized discounted cumulative gain) after *k* returned labels are common performance metrics for multi-label classification: ###Code def dcg_at_k(vals, k): res = 0 for i in range(k): res += vals[i][1] / np.log2(i + 2) return res def scores_at_k(truevals, predvals, k): precision_at_k, recall_at_k, f1score_at_k, ndcg_at_k = 0, 0, 0, 0 for j in range(len(truevals)): z = list(zip(predvals[j], truevals[j])) sorted_z = sorted(z, reverse=True, key=lambda tup: tup[0]) opt_z = sorted(z, reverse=True, key=lambda tup: tup[1]) truesum = 0 for i in range(k): truesum += sorted_z[i][1] pr = truesum / k rc = truesum / np.sum(truevals[0]) if truesum>0: f1score_at_k += 2*((pr*rc)/(pr+rc)) precision_at_k += pr recall_at_k += rc cg = dcg_at_k(sorted_z, k) / (dcg_at_k(opt_z, k) + 0.00000001) ndcg_at_k += cg precision_at_k /= len(truevals) recall_at_k /= len(truevals) f1score_at_k /= len(truevals) ndcg_at_k /= len(truevals) print('Precision@{0} : {1:.2f}'.format(k, precision_at_k)) print('Recall@{0} : {1:.2f}'.format(k, recall_at_k)) print('F1@{0} : {1:.2f}'.format(k, f1score_at_k)) print('NDCG@{0} : {1:.2f}'.format(k, ndcg_at_k)) scores_at_k(y_val, predictions, 5) ###Output _____no_output_____ ###Markdown Scikit-learn has also some applicable performance [metrics](http://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.metrics) we can try: ###Code print('Precision: {0:.3f} (threshold: {1:.2f})' 
.format(metrics.precision_score(y_val.flatten(), predictions.flatten()>threshold), threshold)) print('Recall: {0:.3f} (threshold: {1:.2f})' .format(metrics.recall_score(y_val.flatten(), predictions.flatten()>threshold), threshold)) print('F1 score: {0:.3f} (threshold: {1:.2f})' .format(metrics.f1_score(y_val.flatten(), predictions.flatten()>threshold), threshold)) average_precision = metrics.average_precision_score(y_val.flatten(), predictions.flatten()) print('Average precision: {0:.3f}'.format(average_precision)) print('Coverage: {0:.3f}' .format(metrics.coverage_error(y_val, predictions))) print('LRAP: {0:.3f}' .format(metrics.label_ranking_average_precision_score(y_val, predictions))) precision, recall, _ = metrics.precision_recall_curve(y_val.flatten(), predictions.flatten()) plt.step(recall, precision, color='b', alpha=0.2, where='post') plt.fill_between(recall, precision, step='post', alpha=0.2, color='b') plt.xlabel('Recall') plt.ylabel('Precision') plt.ylim([0.0, 1.05]) plt.xlim([0.0, 1.0]) plt.title('Precision-recall curve'); ###Output _____no_output_____ ###Markdown LSTM Initialization ###Code class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.embed = nn.Embedding.from_pretrained(embedding_matrix, freeze=True) self.lstm = nn.LSTM(100, 128, num_layers=2, batch_first=True) self.fc1 = nn.Linear(128, 64) self.fc2 = nn.Linear(64, NLABELS) def forward(self, x): x = self.embed(x) _, (h_n, _) = self.lstm(x) x = h_n[1,:,:] x = F.relu(self.fc1(x)) return torch.sigmoid(self.fc2(x)) model = Net().to(device) optimizer = optim.RMSprop(model.parameters(), lr=0.005) criterion = nn.BCELoss() print(model) ###Output _____no_output_____ ###Markdown Learning ###Code %%time epochs = 20 lossv = [] for epoch in range(1, epochs + 1): train(epoch) with torch.no_grad(): print('\nValidation set:') evaluate(validation_loader, lossv) plt.figure(figsize=(5,3)) plt.plot(np.arange(1,epochs+1), lossv) plt.title('validation loss') ###Output _____no_output_____ ###Markdown Inference ###Code %%time with torch.no_grad(): predictions = evaluate(validation_loader) threshold = 0.5 avg_n_gt, avg_n_pred = 0, 0 for t in range(len(y_val)): avg_n_gt += len(np.where(y_val[t]>0.5)[0]) avg_n_pred += len(np.where(predictions[t]>threshold)[0]) avg_n_gt /= len(y_val) avg_n_pred /= len(y_val) print('Average number of ground-truth labels per talk: %.2f' % avg_n_gt) print('Average number of predicted labels per talk: %.2f' % avg_n_pred) nb_talks_to_show = 20 for t in range(nb_talks_to_show): print(t,':') print(' correct: ', end='') for idx in np.where(y_val[t]>0.5)[0].tolist(): sys.stdout.write('['+ntagslist_sorted[idx]+'] ') print() print(' predicted: ', end='') for idx in np.where(predictions[t]>threshold)[0].tolist(): sys.stdout.write('['+ntagslist_sorted[idx]+'] ') print() scores_at_k(y_val, predictions, 5) print('Precision: {0:.3f} (threshold: {1:.2f})' .format(metrics.precision_score(y_val.flatten(), predictions.flatten()>threshold), threshold)) print('Recall: {0:.3f} (threshold: {1:.2f})' .format(metrics.recall_score(y_val.flatten(), predictions.flatten()>threshold), threshold)) print('F1 score: {0:.3f} (threshold: {1:.2f})' .format(metrics.f1_score(y_val.flatten(), predictions.flatten()>threshold), threshold)) average_precision = metrics.average_precision_score(y_val.flatten(), predictions.flatten()) print('Average precision: {0:.3f}'.format(average_precision)) print('Coverage: {0:.3f}' .format(metrics.coverage_error(y_val, predictions))) print('LRAP: {0:.3f}' 
.format(metrics.label_ranking_average_precision_score(y_val, predictions))) precision, recall, _ = metrics.precision_recall_curve(y_val.flatten(), predictions.flatten()) plt.step(recall, precision, color='b', alpha=0.2, where='post') plt.fill_between(recall, precision, step='post', alpha=0.2, color='b') plt.xlabel('Recall') plt.ylabel('Precision') plt.ylim([0.0, 1.05]) plt.xlim([0.0, 1.0]) plt.title('Precision-recall curve'); ###Output _____no_output_____
Arvato Project Workbook.ipynb
###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sys import pickle import ast from sklearn.preprocessing import Imputer, StandardScaler from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.linear_model import LogisticRegression from sklearn.ensemble import BaggingClassifier from sklearn.metrics import roc_auc_score from hyperopt import hp import lightgbm as lgb from skopt import BayesSearchCV sys.path += ['./ilikeds'] import eda import helper_functions as h import train_classifier as t import warnings warnings.filterwarnings('ignore') # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. 
Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data # azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') # customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') azdias = pd.read_pickle ('../data/azdias.p') customers = pd.read_pickle ('../data/customers.p') sampling_rate = 0.1 r=np.random.randint(0, azdias.shape[0], int(azdias.shape[0]*sampling_rate)) azdias=azdias.loc[r,:].copy() r=np.random.randint(0, customers.shape[0], int(customers.shape[0]*sampling_rate)) customers=customers.loc[r,:].copy() # read in feature info file feat_info = pd.read_csv('./feats_info.csv', sep=';', names=['feat', 'type', 'unknow']) feat_info.set_index('feat', inplace =True) feat_info # create a EDA instance for Azdias. eda_azdias= eda.EDA(azdias, feat_info, label = 'Azdias') # create a EDA instance for Azdias. 
for customers eda_customers= eda.EDA(customers, feat_info, label = 'Customers') mixed = eda_azdias.feat_info[ (eda_azdias.feat_info.type == 'mixed') & (eda_azdias.feat_info.is_drop == 0)] mixed ###Output _____no_output_____ ###Markdown Data Preprocessing ###Code #### Define action dictionary # action_dic ={ # 1: 'drop: high missing values', # 2: 'drop: duplicated', # 3: 're-encoding: mapping', # 4: 're-encoding: logarithmic scaling', # 5: 'split', # } # Removing the three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP') in Customers feats_customers_excl = list(set(eda_customers.data.columns) - set(eda_azdias.data.columns)) feats_customers_excl eda_customers.data.drop(columns = feats_customers_excl, inplace =True) ###Output _____no_output_____ ###Markdown Step 1: Convert Unknown and Missing Values to NaN ###Code eda_azdias.missing2nan() eda_customers.missing2nan() # Re-Collecting feature stats eda_azdias.update_stats() eda_customers.update_stats() ###Output Number of missing values in Azdias: Before converstion is 3299139 Ater converstion IS 8440301 Increase in missing values: 155.83 % Number of missing values in Customers: Before converstion is 1393614 Ater converstion IS 2432252 Increase in missing values: 74.53 % ###Markdown Step 2: Remove Unknown and Missing ValuesRemove rows and columns with high NaN values a. Deleting rows ###Code rows_n_nans = azdias.isnull().sum(axis=1) plt.hist(rows_n_nans / azdias.shape[1], bins=90) plt.title('Distribution of missing value per row') _, rows_droped = h.split_dataset(azdias, threshold=0.25) n_deleted_rows = rows_droped.shape[0] print(f'Before delete the missing rows, {eda_azdias} has {eda_azdias.data.shape[0]} rows') azdias.drop(index = rows_droped.index, inplace =True) print(f'After delete the missing rows, {eda_azdias} has {eda_azdias.data.shape[0]} rows') print(f'Delete {n_deleted_rows} rows in total') _, rows_droped = h.split_dataset(customers, threshold=0.25) n_deleted_rows = rows_droped.shape[0] print(f'Before delete the high missing rate rows, {eda_customers} has {eda_customers.data.shape[0]} rows') eda_customers.data.drop(index = rows_droped.index, inplace =True) print(f'After delete the hitg missing rate rows, {eda_customers} has {eda_customers.data.shape[0]} rows') print(f'Delete {n_deleted_rows} rows in total') ###Output Before delete the high missing rate rows, Customers has 19165 rows After delete the hitg missing rate rows, Customers has 13367 rows Delete 5798 rows in total ###Markdown b. 
Deleteing columns ###Code eda_azdias.feat_info.percent_of_nans.sort_values().hist(bins = 40, alpha = 0.7) plt.xlabel('Missing Rate ') plt.ylabel('Num Of Features') plt.title('Distribution of missing value per column') thr_col_missing = 0.6 feats_high_missing_azdias = eda_azdias.feat_info.loc[eda_azdias.feat_info.percent_of_nans > thr_col_missing].index feats_high_missing_azdias feats_high_missing_customers = eda_customers.feat_info.loc[eda_customers.feat_info.percent_of_nans >thr_col_missing ].index feats_high_missing_customers # # Get features with high missing rates in both datasets feats_high_missing = set(feats_high_missing_azdias).intersection(set(feats_high_missing_customers)) feats_high_missing eda_azdias.feat_info.loc[feats_high_missing,'action'] = h.action_dic[1] eda_azdias.feat_info.loc[feats_high_missing,'is_drop'] = 1 eda_customers.feat_info.loc[feats_high_missing,'action'] = h.action_dic[1] eda_customers.feat_info.loc[feats_high_missing,'is_drop'] = 1 eda_azdias.feat_info.loc[feats_high_missing] eda_azdias.data.drop(columns = list(feats_high_missing), inplace =True) eda_customers.data.drop(columns = list(feats_high_missing), inplace =True) # Re-Collecting feature stats eda_azdias.update_stats() eda_customers.update_stats() ###Output _____no_output_____ ###Markdown Step 3. Remove duplicated features a. Compare _GROB and _FEIN features ###Code feats_fein =[x for x in eda_azdias.data.columns if x.endswith( '_FEIN')] feats_grob = [x for x in eda_azdias.data.columns if x.endswith( '_GROB')] feats_duplicate = [x for x in zip(pd.Series(feats_grob).sort_values(), pd.Series(feats_fein).sort_values())] h.plot_2feats_comparison(eda_azdias.data, feats_duplicate) # CAMEO_DEU_2015 and CAMEO_DEUG_2015 are both describing the wealth and life stage topology but at different scales. I've decided to keep CAMEO_DEUG_2015 which describes the information at a rough scale and drop CAMEO_DEU_2015. Another reason for dropping this feature is that it contains over 40 categories. feats_duplicated = ['ALTERSKATEGORIE_FEIN', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_STATUS_GROB'] eda_azdias.feat_info.loc[feats_duplicated,'action'] = h.action_dic[2] eda_azdias.feat_info.loc[feats_duplicated,'is_drop'] = 1 eda_customers.feat_info.loc[feats_duplicated,'action'] = h.action_dic[2] eda_customers.feat_info.loc[feats_duplicated,'is_drop'] = 1 eda_azdias.feat_info.loc[feats_duplicated] ###Output _____no_output_____ ###Markdown Step 4: Re-encodings features a. Re-encodings binary/categorical/mixed features ###Code feats_encoding = [ 'OST_WEST_KZ', 'CAMEO_DEUG_2015', # 'CAMEO_DEU_2015', 'CAMEO_INTL_2015', # 'EINGEFUEGT_AM', # 'D19_LETZTER_KAUF_BRANCHE', ] # h.check_features(eda_azdias, encoding) for x in feats_encoding: print(x, eda_azdias.data[x].unique()) for x in feats_encoding: eda_azdias.re_encoding(x) eda_customers.re_encoding(x) ###Output _____no_output_____ ###Markdown b. Re-encodings numeric features ###Code eda_azdias.feat_info.loc[(eda_azdias.feat_info.type == 'numeric') & (eda_azdias.feat_info.is_drop == 0)] numeric_feats = eda_azdias.feat_info.loc[(eda_azdias.feat_info.type == 'numeric') & (eda_azdias.feat_info.is_drop == 0)].index numeric_feats = numeric_feats.drop(['EINGEZOGENAM_HH_JAHR', 'GEBURTSJAHR', 'MIN_GEBAEUDEJAHR']) data = eda_azdias.data[numeric_feats] # pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (20,12), diagonal = 'kde') ###Output _____no_output_____ ###Markdown - ANZ_HAUSHALTE_AKTIV appears rather highly correlated with ANZ_STATISTISCHE_HAUSHALTE. 
so I decide delete ANZ_STATISTISCHE_HAUSHALTE and keep ANZ_HAUSHALTE_AKTIV. - ANZ_HAUSHALTE_AKTIV and KBA13_ANZAHL_PKW appear to have a skewed distribution, I will apply the natural logarithmic transformation to them ###Code eda_azdias.feat_info.loc['ANZ_STATISTISCHE_HAUSHALTE' ,['is_drop', 'action']] = 1, h.action_dic[2] eda_customers.feat_info.loc['ANZ_STATISTISCHE_HAUSHALTE' ,['is_drop', 'action']] = 1, h.action_dic[2] eda_azdias.data['ANZ_HAUSHALTE_AKTIV'] = np.log(eda_azdias.data['ANZ_HAUSHALTE_AKTIV'] +2) eda_azdias.data['KBA13_ANZAHL_PKW'] = np.log(eda_azdias.data['KBA13_ANZAHL_PKW']) eda_customers.data['ANZ_HAUSHALTE_AKTIV'] = np.log(eda_customers.data['ANZ_HAUSHALTE_AKTIV'] +2) eda_customers.data['KBA13_ANZAHL_PKW'] = np.log(eda_customers.data['KBA13_ANZAHL_PKW']) eda_azdias.feat_info.loc['ANZ_STATISTISCHE_HAUSHALTE','action'] =h.action_dic[4] eda_customers.feat_info.loc['ANZ_STATISTISCHE_HAUSHALTE','action'] = h.action_dic[4] # pd.plotting.scatter_matrix(eda_azdias.data[['ANZ_HAUSHALTE_AKTIV', 'KBA13_ANZAHL_PKW']] , alpha = 0.3, figsize = (20,12), diagonal = 'kde') # Re-Collecting feature stats eda_azdias.update_stats() eda_customers.update_stats() # eda_azdias.feat_info.loc[(eda_azdias.feat_info.action == 're-encoding: mapping') || (eda_azdias.feat_info.action == 're-encoding: logarithmic scaling'), ['type', 'unknow', 'action')] eda_azdias.feat_info.loc[(eda_azdias.feat_info.action == 're-encoding: mapping') | (eda_azdias.feat_info.action == 're-encoding: logarithmic scaling') , ['type', 'unknow', 'action']] ###Output _____no_output_____ ###Markdown Step 5. Split Mixed Features ###Code # mixed = eda_azdias.feat_info[ (eda_azdias.feat_info['type'] == 'mixed') & (eda_azdias.feat_info.is_drop == '0')] mixed_feats = eda_azdias.feat_info[ (eda_azdias.feat_info.type == 'mixed') & (eda_azdias.feat_info.is_drop == 0)] mixed_feats # mixed_feats = ['CAMEO_INTL_2015', 'LP_LEBENSPHASE_GROB', 'PRAEGENDE_JUGENDJAHRE', 'EINGEFUEGT_AM'] for x in mixed_feats.index: eda_azdias.split_mixed_feat(x) eda_customers.split_mixed_feat(x) feats_splited = ['CAMEO_INTL_2015_SPLIT_WEALTH', 'CAMEO_INTL_2015_SPLIT_LIFE_STAGE','LP_LEBENSPHASE_GROB_SPLIT_FAMILY','LP_LEBENSPHASE_GROB_SPLIT_AGE','LP_LEBENSPHASE_GROB_SPLIT_INCOME','PRAEGENDE_JUGENDJAHRE_SPLIT_DECADE','PRAEGENDE_JUGENDJAHRE_SPLIT_MOVEMENT'] eda_azdias.data[feats_splited] feats_splited_info = pd.DataFrame({ 'type': pd.Series('categorical', index = feats_splited), 'unknow': pd.Series('[]', index = feats_splited)}, index = feats_splited) feat_info_split = eda_azdias.build_feat_info(feats_splited_info) eda_azdias.feat_info= pd.concat([eda_azdias.feat_info, feat_info_split], sort = False) # eda_azdias.feat_info feat_info_split = eda_customers.build_feat_info(feats_splited_info) eda_customers.feat_info= pd.concat([eda_customers.feat_info, feat_info_split], sort = False) # eda_customers.feat_info eda_azdias.update_stats() eda_customers.update_stats() ###Output _____no_output_____ ###Markdown Step 6. 
Remove features with high distinct values ###Code to_drop = eda_azdias.feat_info.loc[(eda_azdias.feat_info.is_drop == 0) & (eda_azdias.feat_info.value_distinct > 40)].index eda_azdias.feat_info.loc[to_drop ,['is_drop', 'action']] = 1, h.action_dic[10] eda_azdias.data.drop(columns = to_drop, inplace =True) to_drop = eda_customers.feat_info.loc[(eda_customers.feat_info.is_drop == 0) & (eda_customers.feat_info.value_distinct > 40)].index eda_customers.feat_info.loc[to_drop,['is_drop', 'action']] = 1, h.action_dic[10] eda_customers.data.drop(columns = to_drop, inplace =True) # eda_azdias.feat_info.loc['D19_LETZTER_KAUF_BRANCHE'] obj_feats = eda_azdias.data.select_dtypes(include=['object']).columns eda_azdias.feat_info.loc[obj_feats ,['is_drop', 'action']] = 1, h.action_dic[3] eda_azdias.data.drop(columns = obj_feats, inplace =True) obj_feats = eda_customers.data.select_dtypes(include=['object']).columns eda_customers.feat_info.loc[obj_feats,['is_drop', 'action']] = 1, h.action_dic[3] eda_customers.data.drop(columns = obj_feats, inplace =True) ###Output _____no_output_____ ###Markdown Step 7. Remove outliers ###Code outlier_feats = eda_azdias.feat_info[ eda_azdias.feat_info.is_drop == 0].index eda_azdias.clean_outlier(outlier_feats) eda_customers.clean_outlier(outlier_feats) ###Output Cleaning outliers for AGER_TYP ... Cleaning outliers for ALTERSKATEGORIE_GROB ... Cleaning outliers for ALTER_HH ... Cleaning outliers for ANREDE_KZ ... Cleaning outliers for ANZ_HH_TITEL ... Cleaning outliers for ANZ_KINDER ... Cleaning outliers for ANZ_PERSONEN ... Cleaning outliers for ANZ_TITEL ... Cleaning outliers for ARBEIT ... Cleaning outliers for BALLRAUM ... Cleaning outliers for CAMEO_DEUG_2015 ... Cleaning outliers for CJT_GESAMTTYP ... Cleaning outliers for D19_BUCH_CD ... Cleaning outliers for D19_GESAMT_ANZ_12 ... Cleaning outliers for D19_GESAMT_ANZ_24 ... Cleaning outliers for D19_GESAMT_DATUM ... Cleaning outliers for D19_GESAMT_OFFLINE_DATUM ... Cleaning outliers for D19_GESAMT_ONLINE_DATUM ... Cleaning outliers for D19_KONSUMTYP ... Cleaning outliers for D19_SONSTIGE ... Cleaning outliers for D19_SOZIALES ... Cleaning outliers for D19_VERSAND_ANZ_24 ... Cleaning outliers for D19_VERSAND_DATUM ... Cleaning outliers for D19_VERSAND_OFFLINE_DATUM ... Cleaning outliers for D19_VERSAND_ONLINE_DATUM ... Cleaning outliers for D19_VOLLSORTIMENT ... Cleaning outliers for DSL_FLAG ... Cleaning outliers for EINGEZOGENAM_HH_JAHR ... Cleaning outliers for EWDICHTE ... Cleaning outliers for FINANZTYP ... Cleaning outliers for FINANZ_ANLEGER ... Cleaning outliers for FINANZ_HAUSBAUER ... Cleaning outliers for FINANZ_MINIMALIST ... Cleaning outliers for FINANZ_SPARER ... Cleaning outliers for FINANZ_UNAUFFAELLIGER ... Cleaning outliers for FINANZ_VORSORGER ... Cleaning outliers for GEBAEUDETYP ... Cleaning outliers for GEBAEUDETYP_RASTER ... Cleaning outliers for GFK_URLAUBERTYP ... Cleaning outliers for GREEN_AVANTGARDE ... Cleaning outliers for HEALTH_TYP ... Cleaning outliers for HH_DELTA_FLAG ... Cleaning outliers for HH_EINKOMMEN_SCORE ... Cleaning outliers for INNENSTADT ... Cleaning outliers for KBA05_ALTER1 ... Cleaning outliers for KBA05_ALTER2 ... Cleaning outliers for KBA05_ALTER3 ... Cleaning outliers for KBA05_ALTER4 ... Cleaning outliers for KBA05_ANHANG ... Cleaning outliers for KBA05_ANTG1 ... Cleaning outliers for KBA05_ANTG2 ... Cleaning outliers for KBA05_ANTG3 ... Cleaning outliers for KBA05_ANTG4 ... Cleaning outliers for KBA05_AUTOQUOT ... Cleaning outliers for KBA05_BAUMAX ... 
[output truncated: one 'Cleaning outliers for <feature> ...' line is printed for every remaining retained feature] ###Markdown Step 8.
Remove collinear features ###Code feats = eda_azdias.feat_info.loc[eda_azdias.feat_info.is_drop == 0].index corr_matrix_azdias = eda_azdias.data.loc[:, feats].dropna().corr().abs() corr_matrix_azdias.head() # Upper triangle of correlations upper_azdias = corr_matrix_azdias.where(np.triu(np.ones(corr_matrix_azdias.shape), k=1).astype(np.bool)) upper_azdias.head() # Threshold for removing correlated variables threshold = 0.85 to_drop_azdias = [column for column in upper_azdias.columns if any(upper_azdias[column] > threshold)] print(f'{eda_azdias}: There are {len(to_drop_azdias)} with correlations above {threshold}.') feats = eda_customers.feat_info.loc[eda_customers.feat_info.is_drop == 0].index corr_matrix_cus = eda_customers.data.loc[:, feats].dropna().corr().abs() # Upper triangle of correlations upper_cus = corr_matrix_cus.where(np.triu(np.ones(corr_matrix_cus.shape), k=1).astype(np.bool)) # Select columns with correlations above threshold to_drop_cus = [column for column in upper_cus.columns if any(upper_cus[column] > threshold)] print(f'{eda_customers}: There are {len(to_drop_cus)} with correlations above {threshold}.') to_drop = set(to_drop_azdias).intersection(set(to_drop_cus)) print(f'There are {len(to_drop)} columns to be removed due to with a correlation bigger than {threshold} !') eda_azdias.feat_info.loc[to_drop ,['is_drop', 'action']] = 1, h.action_dic[10] eda_customers.feat_info.loc[to_drop,['is_drop', 'action']] = 1, h.action_dic[10] eda_azdias.data.drop(columns = to_drop, inplace =True) eda_customers.data.drop(columns = to_drop, inplace =True) eda_customers.feat_info.loc[to_drop,['type','is_drop','action']] # Save the droped features into a file. feats_todrop1 = eda_azdias.feat_info.drop(feats_splited)[eda_azdias.feat_info.is_drop == 1].index.tolist() feats_todrop2 = eda_customers.feat_info.drop(feats_splited)[ eda_customers.feat_info.is_drop == 1].index.tolist() feats_todrop = list(set(feats_todrop1).intersection(set(feats_todrop2))) for c in feats_todrop: if c in eda_azdias.data.columns: eda_azdias.data.drop(columns = c, inplace = True) if c in eda_customers.data.columns: eda_customers.data.drop(columns = c, inplace = True) pd.Series(feats_todrop).sort_values().to_csv('feats_dropped.csv', index=False) ###Output _____no_output_____ ###Markdown Step 8: Impute missing value ###Code impMedian = Imputer(strategy='median') impFreq = Imputer(strategy='most_frequent') # eda_azdias.data_imputed = pd.DataFrame(imputer.fit_transform(eda_azdias.data)) # eda_customers.data_imputed = pd.DataFrame(imputer.fit_transform(eda_customers.data)) # other_feats = eda_azdias.data.columns.drop(numeric_feats) # numeric_imputed = pd.DataFrame(impMedian.fit_transform(eda_azdias.data[numeric_feats]), columns = numeric_feats) # other_imputed = pd.DataFrame(impFreq.fit_transform(eda_azdias.data[other_feats]), columns =other_feats ) # eda_azdias.data_imputed = pd.concat([other_imputed, numeric_imputed], axis=0) # eda_azdias.data_imputed eda_azdias.data_imputed = pd.DataFrame(impMedian.fit_transform(eda_azdias.data), columns=eda_azdias.data.columns).astype('float') eda_customers.data_imputed = pd.DataFrame(impMedian.fit_transform(eda_customers.data), columns=eda_customers.data.columns).astype('float') ###Output _____no_output_____ ###Markdown Step 9: Feature Scaling ###Code eda_azdias.data_scaled = pd.DataFrame(StandardScaler().fit_transform(eda_azdias.data_imputed) , columns=eda_azdias.data.columns) eda_customers.data_scaled = pd.DataFrame(StandardScaler().fit_transform(eda_customers.data_imputed) , 
columns=eda_customers.data.columns) ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Principal component analysis (PCA) ###Code h.do_pca(eda_azdias, 200) print(f'total explained_variance: {eda_azdias.pca.explained_variance_ratio_.sum()}') # print('explained_variance_ratio: ', pca.explained_variance_ratio_) # print('explained_variance: ', pca.explained_variance_) print('n_components: ', eda_azdias.pca.n_components_) h.scree_plot(eda_azdias) h.do_pca(eda_customers, 200) print(f'total explained_variance: {eda_customers.pca.explained_variance_ratio_.sum()}') # print('explained_variance_ratio: ', pca.explained_variance_ratio_) # print('explained_variance: ', pca.explained_variance_) print('n_components: ', eda_customers.pca.n_components_) h.scree_plot(eda_customers) ###Output total explained_variance: 0.9754301733777978 n_components: 200 ###Markdown Clustering with KMean ###Code h.do_pca(eda_azdias, 100) h.do_pca(eda_customers, 100) scores = [] centers = list(range(1,21)) for center in centers: _, score = h.get_kmeans_score(eda_customers.X_pca, center) scores.append(score) plt.plot(centers, scores, linestyle='--', marker='o', color='b') plt.xlabel('K') plt.ylabel('SSE') plt.title('SSE vs. K') ###Output _____no_output_____ ###Markdown Clustering Comparison Azdias vs Customers ###Code model_c, score = h.get_kmeans_score(eda_customers.X_pca, 10) print(score) model_a, score = h.get_kmeans_score(eda_azdias.X_pca, 10) print(score) #save the list # file = open("preds_c.pkl", 'wb') # pickle.dump(preds_c,file) # file = open("preds_a.pkl", 'wb') # pickle.dump(preds_a,file) preds_c = model_c.predict(eda_customers.X_pca) preds_a = model_a.predict(eda_azdias.X_pca) counts_c, counts_a = h.plot_cluster_comparison(preds_c, preds_a) comp_diff_s = (counts_c.percent - counts_a.percent).sort_values(ascending=False) pd.DataFrame( { 'cluster': comp_diff_s.index, 'diff_pct': comp_diff_s.values } ) ###Output _____no_output_____ ###Markdown The plot above represents the cluster distribution of the general population and the customers of the company. positive (cluster 2, 6, 9) is overrepresented and negative (cluster 9) is underrepresented. 
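The over- and under-representation is read off by comparing, cluster by cluster, the share of customers assigned to a cluster with the share of the general population assigned to the same cluster. A minimal sketch of that comparison, assuming the predicted labels `preds_c` (customers) and `preds_a` (general population) from the cells above; the project's own helper `h.plot_cluster_comparison` already produces the same table, so this is only an illustration. ###Code
import numpy as np
import pandas as pd

def cluster_shares(labels):
    """Fraction of rows assigned to each cluster id."""
    counts = pd.Series(labels).value_counts().sort_index()
    return counts / counts.sum()

comparison = pd.DataFrame({
    'customers_pct': cluster_shares(preds_c),
    'population_pct': cluster_shares(preds_a),
})
# Clusters absent from one of the groups count as share 0.
comparison = comparison.fillna(0.0)
# Positive differences mark clusters overrepresented among customers,
# negative differences mark underrepresented ones.
comparison['diff_pct'] = comparison['customers_pct'] - comparison['population_pct']
comparison.sort_values('diff_pct', ascending=False)
###Output _____no_output_____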
###Code h.list_component(eda_customers, 2, 10) h.list_component(eda_customers, 1, 10) h.list_component(eda_customers, 9, 10) # Check features of cluster 1 feats = ['HH_EINKOMMEN_SCORE', 'CAMEO_DEUG_2015', 'FINANZ_HAUSBAUER', 'ZABEOTYP', 'KBA05_AUTOQUOT', 'KBA05_GBZ','GREEN_AVANTGARDE', 'LP_STATUS_FEIN', 'MOBI_REGIO'] h.plot_feats_comparison(eda_azdias.data, eda_customers.data, feats, fig_height=6, fig_aspect=0.8) # Check features of cluster 2 feats = ['PRAEGENDE_JUGENDJAHRE_SPLIT_DECADE', 'FINANZ_SPARER', 'FINANZ_ANLEGER', 'SEMIO_PFLICHT', 'ONLINE_AFFINITAET', 'SEMIO_PFLICHT', 'SEMIO_RAT', 'SEMIO_LUST', 'ALTERSKATEGORIE_GROB', 'FINANZ_VORSORGER', 'ALTER_HH'] h.plot_feats_comparison(eda_azdias.data, eda_customers.data, feats, fig_height=6, fig_aspect=0.8) ###Output _____no_output_____ ###Markdown So, The target customers are upper class (1: CAMEO_DEUG_2015), wealthy(1: HH_EINKOMMEN_SCORE, ) , 50–70 years old (9: PRAEGENDE_JUGENDJAHRE_SPLIT_DECADE) , money savers and investors with high probability (9: FINANZ_SPARER, 2: FINANZ_ANLEGER). They are high earners (9: LP_STATUS_FEIN). These people are with low movement pattern (MOBI_REGIO=4.2). These people are also religious and traditional-minded (SEMIO feature). Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
Step 1: Load the data ###Code # mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_train = pd.read_pickle ('..//data//mailout_train.p') mailout_train.shape # read in feature info file feat_info = pd.read_csv('./feats_info.csv', sep=';', names=['feat', 'type', 'unknow']) feat_info.set_index('feat', inplace =True) ###Output _____no_output_____ ###Markdown Step 2: Preparing and splitting the data ###Code positive_cnts = mailout_train[mailout_train['RESPONSE'] == 1].shape[0] total_cnts = mailout_train.shape[0] print(f'The train set contains only {positive_cnts / total_cnts *100 :1.2f}% customers with positive response') # extract RESPONSE column response = mailout_train['RESPONSE'] # drop RESPONSE column mailout_train.drop(labels=['RESPONSE'], axis=1, inplace=True) # find features to drop because of many missing values missing_per_column = mailout_train.isnull().mean() plt.hist(missing_per_column, bins=34) # read in feature info file # feat_info = pd.read_csv('./feats_info.csv', sep=';', names=['feat', 'type', 'unknown']) # feat_info.set_index('feat', inplace =True) eda_mailout_train = eda.EDA(mailout_train, feat_info, label = 'mailout_train') # Data Cleaning eda_mailout_train.data_pipeline() response = response.loc[mailout_train.index] response.shape ###Output _____no_output_____ ###Markdown Preparing and splitting the data ###Code # We split the dataset into 2/3 training and 1/3 testing sets. train_data, test_data, train_targets, test_targets = train_test_split( eda_mailout_train.data_scaled, response, test_size=0.33, shuffle=True, random_state=h.RANDOM_STATE) ###Output _____no_output_____ ###Markdown Step 4. model evaluation ###Code lrm = LogisticRegression(random_state=h.RANDOM_STATE) bagm = BaggingClassifier() lgbm = lgb.LGBMClassifier(random_state=h.RANDOM_STATE,application='binary') model_dict = { 'logistic regression': lrm, 'bagging': bagm, 'lgbmclassifier': lgbm, } h.build_roc_auc(model_dict, {},eda_mailout_train.data_scaled, response) ###Output Model: logistic regression, Best ROC AUC score: 0.7263846327543013 Model: bagging, Best ROC AUC score: 0.5690059767076286 Model: lgbmclassifier, Best ROC AUC score: 0.7386709671552163 ###Markdown The LGBM classifier got a slightly better score so I'll use it down below to train and tuning for the kaggle competition. Step 5. 
LGB Train and Hyperparameter Tuning ###Code def_params={ 'learning_rate': {'hpf' : hp.loguniform('learning_rate', np.log(0.00001), np.log(0.0075)),'dtype' : 'float'}, 'num_leaves' : {'hpf' : hp.quniform('num_leaves', 3, 15, 1),'dtype' : 'int'}, 'min_data_in_leaf' : {'hpf' : hp.quniform('min_data_in_leaf', 1000, 1500, 50),'dtype' : 'int'}, 'min_sum_hessian_in_leaf': {'hpf' : hp.uniform('min_sum_hessian_in_leaf', 0.0005, 0.002),'dtype' : 'float'}, 'colsample_bytree': {'hpf': hp.uniform('colsample_bytree', 0.5, 0.9),'dtype' : 'float'}, 'reg_alpha': {'hpf': hp.uniform('reg_alpha', 0.3, 1.0),'dtype' : 'float'}, 'reg_lambda': {'hpf': hp.uniform('reg_lambda', 0, 0.6),'dtype' : 'float'}, 'max_bin' : {'hpf' : hp.quniform('max_bin', 10, 80, 1),'dtype' : 'int'}, 'feature_fraction': {'hpf': hp.uniform('feature_fraction', 0.3, 0.7),'dtype' : 'float'}, } n_iter = 300 cv = 10 best, trials, objective = t.search_hyperparameter(def_params, n_iter, cv ,train_data, train_targets) model = t.build_model(best, def_params) model.fit(train_data,train_targets) t.evaluate_model(model, objective, best, test_data, test_targets) sa_results_df = t.plot_result(trials.trials) ###Output 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 300/300 [10:05<00:00, 2.02s/trial, best loss: -0.7620878179086161] Best ROC -0.762 params {'colsample_bytree': 0.766715611270604, 'feature_fraction': 0.6170422163553321, 'learning_rate': 0.005830155646889542, 'max_bin': 29.0, 'min_data_in_leaf': 1400.0, 'min_sum_hessian_in_leaf': 0.0013456876243989134, 'num_leaves': 9.0, 'reg_alpha': 0.4694440842707551, 'reg_lambda': 0.25618252156784005} ###Markdown Step 6. Top 15 most important features of the model ###Code lgb.plot_importance(model, max_num_features = 30, figsize=(10,12)) ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
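Because AUC is computed from the ranking of the submitted scores rather than their absolute values, any monotone rescaling of the predicted probabilities leaves the competition score unchanged, which is why the exact values in the "RESPONSE" column matter less than their ordering. A small illustration with invented labels and scores (not data from the project): ###Code
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 0, 1])
scores = np.array([0.10, 0.40, 0.35, 0.20, 0.80])

print(roc_auc_score(y_true, scores))          # 0.833...
print(roc_auc_score(y_true, scores * 100))    # identical: the ranking is unchanged
print(roc_auc_score(y_true, np.log(scores)))  # identical: log is monotone
###Output _____no_output_____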
###Code # mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') mailout_test = pd.read_pickle ('..//data//mailout_test.p') mailout_test.shape eda_mailout_test = eda.EDA(mailout_test, feat_info, label = 'mailout_test') LNR = mailout_test.LNR.copy() LNR.shape eda_mailout_test.data_pipeline(clean_rows = False) # fit and predict preds_test = model.predict_proba(eda_mailout_test.data_scaled)[:,1] preds_test # create submission file preds_test = pd.concat([LNR, pd.Series(preds_test)], axis = 1) preds_test.rename(columns={0:'RESPONSE'}, inplace= True) preds_test.to_csv('MAILOUT_TEST.csv') ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. 
Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('datasets/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('datasets/Udacity_CUSTOMERS_052018.csv', sep=';') # Be sure to add in a lot more cells (both markdown and code) to document your # approach and findings! azdias.head(10) # Removes columns with majority of missing value columns # Number of records with atleast one NaN column? print(azdias.shape) #azdias.describe() print('No. of columns with atleast one NULL: ', len(azdias.columns[azdias.isna().any()].tolist())) # Number of records with more than 10, 20, ... 40, 50% columns NaN? #print('No. of rows with all Null: ', len(azdias.rows[azdias.isna().any()].tolist())) new = azdias.dropna(inplace = False) print("after dropping nas: ", new.shape) # At what threshold of NaNs, a record becomes irrelevant? Can we derive some guidance from customer dataset? print(customers.shape) #customers.describe() print('No. of columns with atleast one NULL: ', len(customers.columns[customers.isna().any()].tolist())) print('No. 
of records with no NaN: ', customers.dropna(inplace=False).shape) # Number of records with less greater than 30% columns being NaNs # azdias count = 0 azdias_subset = azdias[:100] thresh_row_nan_col = 0.1 for i in range(len(azdias_subset.index)): if azdias_subset.iloc[i].isnull().sum() / len(azdias_subset.columns) > thresh_row_nan_col : count += 1 print("Percentage of records with > {}% NaN columns: {}%".format(thresh_row_nan_col * 100, count/len(azdias_subset.index) * 100)) # customer # Identify columns which are mostly NaNs #print('Columns with Nans: {}'.format(azdias.isnull().sum(axis=0)/len(azdias.index) * 100)) # count columns with greater than a threshold number of NaNs thresh_col_nan = 0.3 count = 0 for i in range(len(azdias.columns)): if azdias.iloc[:, i].isnull().sum(axis=0)/ len(azdias.index) > thresh_col_nan: count += 1 print('Percentage of columns with greater than {}% Nan values are: {}%'.format(thresh_col_nan * 100, count/len(azdias.columns) * 100)) ###Output Percentage of columns with greater than 30.0% Nan values are: 1.639344262295082% ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. ###Code # lets drop the columns with more than 30% Nans thresh_col_nan = 0.3 original_num_columns = len(azdias.columns) azdias.drop(azdias.columns[azdias.apply(lambda col: col.isnull().sum()/len(azdias.index) > thresh_col_nan)], inplace = True, axis=1) after_nan_major_removed_num_columns = len(azdias.columns) print('Dropped {} columns!!'.format(original_num_columns - after_nan_major_removed_num_columns)) # drop rows with more than 80% nan columns thresh_row_nan_col = 0.68 ids = azdias[azdias.isnull().sum(axis=1) / len(azdias.columns) > thresh_row_nan_col].index # fix, TODO #print('number of rows: {}'.format(len(ids))) original_length_azdias = azdias.shape[0] azdias.drop(ids, inplace=True) rows_with_major_nan_removed_length_azdias = azdias.shape[0] print('Dropped {} rows!!'.format(original_length_azdias - rows_with_major_nan_removed_length_azdias)) azdias.describe() # check if there is categorical data? all_cols = azdias.columns numeric_cols = azdias._get_numeric_data().columns print('No. of numerical cols: {}'.format(len(numeric_cols))) categorical_cols = list(set(all_cols) - set(numeric_cols)) print('No. of categorical columns are: {}'.format(len(categorical_cols))) # if so what to do with them, do they add value? ''' print(azdias['D19_LETZTER_KAUF_BRANCHE'].head(10)) print(azdias['OST_WEST_KZ'].head(10)) print(azdias['CAMEO_INTL_2015'].head(10)) print(azdias['EINGEFUEGT_AM'].head(10)) print(azdias['CAMEO_DEU_2015'].head(10)) print(azdias['CAMEO_DEUG_2015'].head(10)) ''' # dropping for now azdias = azdias._get_numeric_data()#azdias.select_dtypes(['number'])#azdias.drop(columns=categorical_cols, inplace=False, axis=1) print('No. 
of columns in azdias(after categorical removal): {}'.format(len(azdias.columns))) # impute Nans azdias_no_nans = azdias.fillna(value=0, inplace=False) # perform PCA from sklearn.decomposition import PCA n_components = 5 pca = PCA(n_components=n_components) pca.fit(azdias_no_nans) print('Principal components are: {}'.format(pca.components_)) # get corresponding df azdias_pca = pd.DataFrame(pca.transform(azdias_no_nans), columns=['PCA_% i' %i for i in range(n_components)]) azdias_pca.describe() import matplotlib.pyplot as plt from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=3).fit(azdias_pca) centroids = kmeans.cluster_centers_ print(centroids) plt.scatter(azdias_pca['PCA_ 0'], azdias_pca['PCA_ 1'], c= kmeans.labels_.astype(float), s=50, alpha=0.5) #print(azdias_pca.columns) plt.scatter(centroids[:, 0], centroids[:, 1], c='red', s=50) plt.show() # elbow method wcss = [] for i in range(1, 11): kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) kmeans.fit(azdias_pca) wcss.append(kmeans.inertia_) plt.plot(range(1, 11), wcss) plt.title('Elbow Method') plt.xlabel('Number of clusters') plt.ylabel('WCSS') plt.show() azdias_pca.to_csv('datasets/current_azdias.csv', index=False) import pandas as pd azdias_pca = pd.read_csv('datasets/current_azdias.csv') azdias_pca.describe() # select the optimal cluster based on elbow method plot above # we will take 3 import matplotlib.pyplot as plt from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=3, init='k-means++', max_iter=300, n_init=10, random_state=0) pred_y = kmeans.fit_predict(azdias_pca) # lets check the effectiveness of clustering azdias_pca.plot(x=azdias_pca.columns[0], y = azdias_pca.columns[1], kind='scatter') plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=300, c='red') plt.show() customers = pd.read_csv('datasets/Udacity_CUSTOMERS_052018.csv', sep=';') # preprocess/clean customer dataset like we did for azdias ## column dropping # lets drop the columns with more than 30% Nans thresh_col_nan = 0.3 original_num_columns = len(customers.columns) customers.drop(customers.columns[customers.apply(lambda col: col.isnull().sum()/len(customers.index) > thresh_col_nan)], inplace = True, axis=1) after_nan_major_removed_num_columns = len(customers.columns) print('Dropped {} columns!!'.format(original_num_columns - after_nan_major_removed_num_columns)) ## row dropping # drop rows with more than 80% nan columns thresh_row_nan_col = 0.68 ids = customers[customers.isnull().sum(axis=1) / len(customers.columns) > thresh_row_nan_col].index # fix, TODO #print('number of rows: {}'.format(len(ids))) original_length_customers = customers.shape[0] customers.drop(ids, inplace=True) rows_with_major_nan_removed_length_customers = customers.shape[0] print('Dropped {} rows!!'.format(original_length_customers - rows_with_major_nan_removed_length_customers)) ## check for categorical data all_cols = customers.columns numeric_cols = customers._get_numeric_data().columns print('No. of numerical cols: {}'.format(len(numeric_cols))) categorical_cols = list(set(all_cols) - set(numeric_cols)) print('No. of categorical columns are: {}'.format(len(categorical_cols))) ### if so what to do with them, do they add value? ### dropping for now customers = customers._get_numeric_data()#azdias.select_dtypes(['number'])#azdias.drop(columns=categorical_cols, inplace=False, axis=1) print('No. 
of columns in azdias(after categorical removal): {}'.format(len(customers.columns))) ## impute Nans customers_no_nans = customers.fillna(value=0, inplace=False) ## pca computation # perform PCA from sklearn.decomposition import PCA n_components = 5 pca = PCA(n_components=n_components) pca.fit(customers_no_nans) print('Principal components are: {}'.format(pca.components_)) # get corresponding df customers_pca = pd.DataFrame(pca.transform(customers_no_nans), columns=['PCA_% i' %i for i in range(n_components)]) customers_pca.describe() # plot customer data against population data clustering for getting information regarding under-representation ax = azdias_pca.plot.scatter(x=azdias_pca.columns[0], y = azdias_pca.columns[1], c='green') customers_pca.plot.scatter(x=customers_pca.columns[0], y = customers_pca.columns[1], c='blue', ax = ax) plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=300, c='red') plt.show() # use this cell to dump the transformed azdias after day of work azdias_pca.to_csv('datasets/current_azdias.csv', index=False) customers_pca.to_csv('datasets/current_customers.csv', index=False) import pandas as pd azdias_pca = pd.read_csv('datasets/current_azdias.csv') customer_pca = pd.read_csv('datasets/current_customers.csv') # visualize over and underrepresented population wrt customers # estimate percentage of total customer population belonging to each cluster # the cluster with least percentage is the under-represented cluster(of population) import matplotlib.pyplot as plt from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=3, init='k-means++', max_iter=300, n_init=10, random_state=0) pred_y = kmeans.fit_predict(azdias_pca) # cluster customers as well pred_y_customers = kmeans.predict(customer_pca) import numpy as np # determine fraction of customers in each cluster ## find distinct cluster ids in pred_y_customers cluster_ids = np.unique(pred_y_customers) print("Unique cluster ids are: {}".format(cluster_ids)) #find number of customers belonging to each cluster id num_customers_in_each_cluster = [np.count_nonzero(pred_y_customers == cluster_id) for cluster_id in cluster_ids] #find percentage for cluster_id in cluster_ids: print("Cluster-id: {}, population-fraction: {}%".format(cluster_ids[cluster_id], num_customers_in_each_cluster[cluster_id] / pred_y_customers.shape[0] * 100.0)) # cluster with least fraction of customers is least-represented # plot fraction of customers with fraction of population for each cluster as bar plot # the cluster with customer fraction higher than the population fraction is over-represented # the cluster with customer fraction lower than the population fraction is under-represented # identifies features which are majority indicator of customer potential ###Output _____no_output_____ ###Markdown Only records with no NaN can be given to PCA or KMeans -> We have to remove irrelevant records(ones with majority of columns being NaNs) + interpolate values for NaN columns of significant records(aka imputation) Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. 
Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code import pandas as pd mailout_train = pd.read_csv('datasets/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') # identify the data-imbalance # percentage of people who responded postively print("Percentage of people who responded to the campaign: {}%".format(len(mailout_train[mailout_train['RESPONSE'] == 1].count()) / len(mailout_train) * 100.0)) # preprocess before training ## Remove categorical columns mailout_train = mailout_train._get_numeric_data() ## Impute Nans mailout_train = mailout_train.fillna(value=0, inplace=False) y = mailout_train['RESPONSE'] X = mailout_train.drop(['RESPONSE'], axis=1) # model selection, refer https://scikit-learn.org/stable/modules/cross_validation.html import numpy as np from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.4, random_state=0) print("After train val splits: Train_X: {}, Test_X: {}, Train_y: {}, Test_y: {}".format(X_train.shape, X_test.shape, y_train.shape, y_test.shape)) from sklearn.linear_model import SGDClassifier clf = SGDClassifier(loss="hinge", penalty="l2", max_iter=30) clf.fit(X_train, y_train) clf.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
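The model above was fitted with the hinge loss, so it has no `predict_proba`; to produce the kind of ranking score AUC rewards, the signed margin from `decision_function` can be used directly. A minimal sketch against the held-out split from above (illustrative only; refitting with a logistic loss, `loss='log'` on older scikit-learn or `loss='log_loss'` on 1.1+, would be an alternative that exposes `predict_proba`): ###Code
from sklearn.metrics import roc_auc_score

# Ranking scores for the validation split: the signed distance to the hyperplane.
# ROC AUC only needs an ordering, so the margin works without probabilities.
val_scores = clf.decision_function(X_test)
print('Validation ROC AUC: {:.3f}'.format(roc_auc_score(y_test, val_scores)))
###Output _____no_output_____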
###Code mailout_test = pd.read_csv('datasets/Udacity_MAILOUT_052018_TEST.csv', sep=';') # preprocess test input # generate response dataframe for the test data y_pred = clf.predict(mailout_test) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pickle import importlib from scipy.stats import skew from scipy import stats from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, FunctionTransformer, OneHotEncoder from sklearn.compose import ColumnTransformer from sklearn.decomposition import PCA from sklearn.cluster import KMeans from imblearn.over_sampling import SMOTE from sklearn.model_selection import KFold, ShuffleSplit, learning_curve,GridSearchCV from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, BaggingClassifier from sklearn.metrics import roc_auc_score ## Load custom module import functions as fn importlib.reload(fn) # import is_string_dtype, is_numeric_dtype # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Table of Contents:* [Get to Know the Data](gettoknowthedata) * [Missing values](missingvalues) * [Data cleaning](dataclean) * [Feature engineering](featureengineering)* [Customer Segmentation Report](customersegm) * [Principal Components Analysis](pca) * [K-Means](kmeans) * [Cluster analysis](clusteranalysis)* [Supervised Learning Model](supervisedlearningmodel) * [Class imbalance](classimb) * [Model selection](modelselection) * [Hyperparameter tuning](tuning)* [Conclusion](conclusion) Part 0: Get to Know the Data There are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- 
`Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. 
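One such pre-processing function is the `clean_data` helper from the custom `functions` module (imported as `fn` in the imports cell above), which is not included in this notebook. Purely as an illustration of what a helper with that signature might look like — the body below is an assumption pieced together from the cleaning steps described later, not the actual module: ###Code
import pandas as pd

def clean_data(df, drop_rows=None, drop_cols=None):
    """Hypothetical sketch of a reusable cleaning step (not the real functions.py).

    Drops sparse columns/rows and parses a few columns into sensible types,
    mirroring the steps discussed in the cleaning section below.
    """
    df = df.copy()
    if drop_cols is not None:
        df = df.drop(columns=[c for c in drop_cols if c in df.columns])
    if drop_rows is not None:
        df = df.drop(index=drop_rows, errors='ignore')
    if 'OST_WEST_KZ' in df.columns:
        df['OST_WEST_KZ'] = df['OST_WEST_KZ'].map({'W': 1, 'O': 0})  # recode East/West flag
    if 'EINGEFUEGT_AM' in df.columns:
        df['EINGEFUEGT_AM'] = pd.to_datetime(df['EINGEFUEGT_AM']).dt.year  # keep only the year
    if 'CAMEO_DEUG_2015' in df.columns:
        df['CAMEO_DEUG_2015'] = pd.to_numeric(df['CAMEO_DEUG_2015'], errors='coerce')
    return df
###Output _____no_output_____ ###Markdown Keeping these steps in a single function makes it straightforward to apply exactly the same treatment to the customers and "MAILOUT" files later on.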
###Code # # load in the data azdias = pd.read_csv('data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('data/Udacity_CUSTOMERS_052018.csv', sep=';') ###Output _____no_output_____ ###Markdown First, we take a look at the general structure of the datasets, the column types and missing values ###Code ### azdias dataset print("The general German information has {} rows and {} columns".format(azdias.shape[0], azdias.shape[1])) azdias.head() azdias.describe(include = 'all') ### azdias dataset print("The customer information has {} rows and {} columns".format(customers.shape[0], customers.shape[1])) customers.head() customers.describe(include='all') ###Output _____no_output_____ ###Markdown Missing Values At a first glance there are several columns with empty values. In addition to that there are many fetures with a value that actually means the information is missing. In the following section we investigate this further in order to reduce the dataset by dropping columns filled with empty values. ###Code ## the attributes data features = pd.read_excel('data/DIAS Attributes - Values 2017.xlsx', header = 1, usecols = [1,2,3,4]).fillna(method = 'ffill') features_missing = features[features['Meaning'] == 'unknown'] features_missing.head() missing_dict = fn.missing_dict(features_missing) missing_dict az_clean = azdias.copy() ## Replace values meaning 'unknown' with NAs for (key,value) in missing_dict.items(): try: az_clean[key].replace(missing_dict[key], np.nan, inplace = True) except: ## print columns that are in the attribute list but not on the data print(key) continue az_clean['CAMEO_DEU_2015'] = az_clean['CAMEO_DEU_2015'].replace('XX', np.nan) az_clean['CAMEO_INTL_2015'] = az_clean['CAMEO_INTL_2015'].replace('XX', np.nan) az_clean['CAMEO_DEUG_2015'] = az_clean['CAMEO_DEUG_2015'].replace('X', np.nan) az_clean.head() azdias_null_count = az_clean.isnull().sum() ## total missing values azdias_null_share = pd.DataFrame({'share':azdias_null_count/az_clean.shape[0]}).sort_values(by='share',ascending=False).reset_index() plt.rcParams["figure.figsize"] = (14,5) plt.hist(azdias_null_share["share"], color = 'darkred') plt.xlabel('Share of missing values') plt.title('Histogram of share of missing values per column'); ###Output _____no_output_____ ###Markdown From this figure we can determine that we can eliminate all columns with more than 40% missing values. ###Code drop_columns = list(azdias_null_share[azdias_null_share['share']>0.4]['index']) plt.rcParams["figure.figsize"] = (20,5) sns.barplot(data = azdias_null_share[azdias_null_share['index'].isin(drop_columns)], x = 'index', y = 'share', color = 'darkblue' ) plt.xlabel('Column Name') plt.ylabel('Null share') plt.title('Columns with >50% NULL share'); ###Output _____no_output_____ ###Markdown In the following, we explore eliminating rows with missing values as eliminating incomplete entries can also contribute to improve our fit. ###Code azdias_nrows_count = az_clean.isnull().sum(axis=1) azdias_nrows_share = pd.DataFrame({'share':azdias_nrows_count/az_clean.shape[1]}).sort_values(by='share',ascending=False) plt.rcParams["figure.figsize"] = (14,5) plt.hist(azdias_nrows_share["share"], color = 'darkred') plt.xlabel('Share of missing values') plt.title('Histogram of share of missing values per row'); ###Output _____no_output_____ ###Markdown From this figure we determine, we can drop all rows with more than half of the columns empty. 
###Code drop_rows = azdias_nrows_share[azdias_nrows_share['share']>= 0.5].index print("Dropping all rows with more than 50% of empty columns will result on a loss of {}% of the rows". format(round(len(drop_rows)*100/azdias.shape[0],2))) ###Output Dropping all rows with more than 50% of empty columns will result on a loss of 11.22% of the rows ###Markdown Data cleaning In addition to eliminating empty values, we also address the following:* Removing redundant columns or with many unique values* Parsing columns to their correct data format according their content (*e.g.*, dates as dates, numbers as float or int, etc.). ###Code az_clean2 = fn.clean_data(df = az_clean, drop_rows= drop_rows, drop_cols= drop_columns) # Print new shape and datatypes print("Old shape: {}".format(azdias.shape)) print("New shape: {} \n".format(az_clean2.shape)) print("Datatypes:") print(az_clean2.dtypes.value_counts()) cat_columns = az_clean2.select_dtypes(['object']).columns print(az_clean2[cat_columns].describe()) for c in cat_columns: print('{} has {} unique values with the following content:'.format(c, az_clean2[c].nunique())) print(az_clean2[c].unique()) ###Output CAMEO_DEU_2015 has 44 unique values with the following content: ['8A' '4C' '2A' '6B' '8C' '4A' '2D' '1A' '1E' '9D' '5C' '8B' '7A' '5D' '9E' '9B' '1B' '3D' nan '4E' '4B' '3C' '5A' '7B' '9A' '6D' '6E' '2C' '7C' '9C' '7D' '5E' '1D' '8D' '6C' '6A' '5B' '4D' '3A' '2B' '7E' '3B' '6F' '5F' '1C'] CAMEO_INTL_2015 has 42 unique values with the following content: [51.0 24.0 12.0 43.0 54.0 22.0 14.0 13.0 15.0 33.0 41.0 34.0 55.0 25.0 nan 23.0 31.0 52.0 35.0 45.0 44.0 32.0 '22' '24' '41' '12' '54' '51' '44' '35' '23' '25' '14' '34' '52' '55' '31' '32' '15' '13' '43' '33' '45'] D19_LETZTER_KAUF_BRANCHE has 35 unique values with the following content: [nan 'D19_UNBEKANNT' 'D19_SCHUHE' 'D19_ENERGIE' 'D19_KOSMETIK' 'D19_VOLLSORTIMENT' 'D19_SONSTIGE' 'D19_BANKEN_GROSS' 'D19_DROGERIEARTIKEL' 'D19_HANDWERK' 'D19_BUCH_CD' 'D19_VERSICHERUNGEN' 'D19_VERSAND_REST' 'D19_TELKO_REST' 'D19_BANKEN_DIREKT' 'D19_BANKEN_REST' 'D19_FREIZEIT' 'D19_LEBENSMITTEL' 'D19_HAUS_DEKO' 'D19_BEKLEIDUNG_REST' 'D19_SAMMELARTIKEL' 'D19_TELKO_MOBILE' 'D19_REISEN' 'D19_BEKLEIDUNG_GEH' 'D19_TECHNIK' 'D19_NAHRUNGSERGAENZUNG' 'D19_DIGIT_SERV' 'D19_LOTTO' 'D19_RATGEBER' 'D19_TIERARTIKEL' 'D19_KINDERARTIKEL' 'D19_BIO_OEKO' 'D19_WEIN_FEINKOST' 'D19_GARTEN' 'D19_BILDUNG' 'D19_BANKEN_LOKAL'] ###Markdown The following actions need to be made regarding these categorical variables:* Even though `CAMEO_DEU_2015` has many unique values, the information that it provides is important. As this information is repeated in `CAMEO_INTL_2015` there is no need to keep both columns and `CAMEO_INTL_2015` can be dropped. `D19_LETZTER_KAUF_BRANCHE` has too many unique values and should be dropped. `LNR` is an internal identification number that can also be dropped.* `CAMEO_DEUG_2015` can be converted to integer.* `EINGEFUEGT_AM` needs to be cast into date from its current string format. * `OST_WEST_KZ` can be coded into "W"=1 AND "E"=0 ###Code drop_columns = drop_columns + ['CAMEO_INTL_2015','D19_LETZTER_KAUF_BRANCHE', 'LNR'] drop_columns az_clean2.drop(['CAMEO_INTL_2015','D19_LETZTER_KAUF_BRANCHE', 'LNR'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown In the same way the customers dataset needs to be cleaned. There are, however additional columns which are not found in the German data. 
###Code ## The customers data set diff_cols = list(set(customers.columns) - set(azdias.columns)) customers_nrows_count = customers.isnull().sum(axis=1) customers_nrows_share = pd.DataFrame({'share':customers_nrows_count/customers.shape[1]}).sort_values(by='share',ascending=False) customers_drop_rows = customers_nrows_share[customers_nrows_share['share']>= 0.5].index customers_clean = fn.clean_data(df = customers, drop_rows = customers_drop_rows, drop_cols = drop_columns + diff_cols) # Print new shape and datatypes print("Old shape: {}".format(customers.shape)) print("New shape: {} \n".format(customers_clean.shape)) print("Datatypes:") print(customers_clean.dtypes.value_counts()) ###Output Old shape: (191652, 369) New shape: (140901, 353) Datatypes: float64 260 int64 92 object 1 dtype: int64 ###Markdown Feature Engineering In addition to cleaning the data set, some feature engineering work needs to be done. In this section we address imputing empty values, and transforming/standarizing features depending on the column type. ###Code ## Imputing pipe line for binary variables bin_cols = fn.find_binary_cols(az_clean2) print("Binary features", bin_cols) bin_pipe = Pipeline([('bin_impute' , SimpleImputer(missing_values=np.nan , strategy='most_frequent' ) )]) ## Imputing and encoding for categorical variables cat_cols = fn.find_cat_cols(az_clean2) print("Categorical features", cat_cols) cat_pipe = Pipeline([ ('cat_impute', SimpleImputer(missing_values=np.nan, strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) ## Imputing for remaining numerical variables num_cols = list(set(az_clean2.columns) - set(bin_cols) - set(cat_cols)) print("Numerical features (first 10 on the list)", num_cols[1:10]) num_pipe = Pipeline([ ('num_impute', SimpleImputer(missing_values=np.nan, strategy='median')), ('num_scale', StandardScaler()) ]) ### Combining transformers column_transformer = ColumnTransformer( transformers = [ ('bin', bin_pipe, bin_cols), ('cat', cat_pipe, cat_cols), ('num', num_pipe, num_cols) ] ) az_clean2[cat_cols] = az_clean2[cat_cols].astype('str') az_trans = column_transformer.fit_transform(az_clean2) ## Transform az_trans to a data frame (recover column names from transformator) onehot_names = list(column_transformer.transformers_[1][1].named_steps['onehot'].get_feature_names(cat_cols)) col_names = bin_cols + onehot_names + num_cols az_df = pd.DataFrame(az_trans, columns = [col_names]) ## Check data types in transformed data set az_df.dtypes.value_counts() ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation Report The main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Principal Component Analysis ###Code # az_df = pickle.load(open('data/az_df.pckl','rb')) az_df.shape[1] ###Output _____no_output_____ ###Markdown The transformed German dataset has 397 different features even after the data cleaning process. In order to reduce the number of features an approach such as Principal Component Analysis can be applied. 
###Code ## Fit the PCA model az_pca = PCA().fit(az_df) fn.scree_plot(az_pca) ###Output _____no_output_____ ###Markdown From this plot we observe that around 200 components explain more than 90% of the variance, which means we can reduce our dataset to almost half of the features without loosing much predictive power. ###Code pca_red = PCA(n_components=200).fit(az_df) az_red = pd.DataFrame(pca_red.transform(az_df)) ###Output _____no_output_____ ###Markdown K-Means We know we can reduce the features in the data for clustering, however, as we make use of a K-Means algorithm for clustering, we still need to determine the no. of clusters required. This can be done, *e.g.* by using the elbow method. ###Code ## Loop over a range of possible cluster values sum_squared_d = [] for i in np.arange(2,41): k = KMeans(n_clusters = i, init = "k-means++") k.fit(az_red.sample(10000)) sum_squared_d.append(k.inertia_) ## Plot sum_squares vs. no. of clusters plt.figure(figsize = (12,4)) plt.plot(np.arange(2,41), sum_squared_d, '-') plt.xticks(np.arange(2,41)) plt.xlabel("Clusters") plt.ylabel("Inertia"); ###Output _____no_output_____ ###Markdown Although not entirely clear from the plot, we observe that the inertia reduces drastically in the range of 2 to 7 clusters. This constitutes the first elbow from the curve and, which means no further (impactful) gains can be achieved by increasing the number of clusters to more than 7. Cluster analysis Once we obtained to which level we can reduce the features of the dataset (*i.e.*, PCA) and the optimal number of clusters to fit (*i.e.*, elbow method for K-Means) then we can proceed to cluster the data to proceed with our analysis. Briefly explained, the approach we use is to fit the reduced German population into K-Means and use the same model to predict the clustering of the customers dataset. Once obtained, the clusters per sample can be compared to determine which group of customers are over- or underrepresented with respect to the general German population. 
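Before wiring these two choices into the combined pipeline below (which uses 200 components and 7 clusters), they can also be checked numerically rather than only read off the plots. A small sketch, assuming `az_pca` and `sum_squared_d` from the cells above are still in scope: ###Code
import numpy as np

# Smallest number of components explaining at least 90% of the variance
cum_var = np.cumsum(az_pca.explained_variance_ratio_)
n_90 = int(np.argmax(cum_var >= 0.90)) + 1
print("{} components explain >= 90% of the variance".format(n_90))

# Relative inertia drop per added cluster, a rough numeric view of the elbow
inertia = np.array(sum_squared_d)
rel_drop = -np.diff(inertia) / inertia[:-1]
for k, d in zip(np.arange(3, 11), rel_drop[:8]):
    print("k={}: inertia drop of {:.1%} vs. k={}".format(k, d, k - 1))
###Output _____no_output_____ ###Markdown If the first figure comes out near 200 and the relative drops flatten after about 7 clusters, the visual reading above is confirmed.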
###Code ## Setup pipeline to reduce and cluster data n_components = 200 n_clusters = 7 cluster_pipe = Pipeline([ ('transform', column_transformer) ,('reduction', PCA(n_components= n_components)) ,('clustering', KMeans(n_clusters = n_clusters, init = 'k-means++')) ]) cluster_pipe.fit(az_clean2) ## Create clustered dataframes az_clust = pd.DataFrame(cluster_pipe.predict(az_clean2), columns = ["Cluster"]) customers_clust = pd.DataFrame(cluster_pipe.predict(customers_clean), columns = ["Cluster"]) ## Join clustered datasets into one dataframe clusters = pd.DataFrame({'Germany':az_clust.value_counts().sort_index() , 'Customers':customers_clust.value_counts().sort_index()}).reset_index() clusters['Cluster'] += 1 clusters['Germany_share'] = clusters['Germany']/clusters['Germany'].sum() clusters['Customers_share'] = clusters['Customers']/clusters['Customers'].sum() clusters['Delta'] = clusters['Customers_share'] - clusters['Germany_share'] ## Plot clusters plt.rcParams["figure.figsize"] = (12,10) plt.subplot(2,1,1) sns.barplot(data = clusters.drop(['Germany', 'Customers', 'Delta'], axis = 1).melt(id_vars = ['Cluster']), x = 'Cluster', y = 'value', hue = 'variable' ) plt.xlabel('Cluster') plt.ylabel('Share') plt.title('Clustered Germany data'); plt.subplot(2,1,2) sns.barplot(data = clusters, x = 'Cluster', y = 'Delta', color = 'grey'#'darkred' ) plt.xlabel('Cluster') plt.ylabel('Share') plt.title('Clustered customers data'); ###Output _____no_output_____ ###Markdown From these graphs we make the following observations:* The German data is somewhat balanced across clusters with the exception of cluster 7. The Customers data, in contrast, is highly concentraded in few clusters (1, 3, and 6).* Precisely clusters 1, 3, 6 are the most overrepresented clusters in comparison to the German data, as seen by the positive difference in shares. Clusters 2 and 7, on the other hand, are underepresented with respect to the German data, as the difference in shares is negative.* Focus clusters can be selected by choosing the clusters with a share difference of more than 10% with respect to the general population.The recommendation to the marketing campaing, thus, is to focus on the customers in clusters 1 and 6 as they stand out from the general population whereas clusters 2 and 5 are quite underepresented and should not be part of the focus group. ###Code ## Extract the values of cluster centers for all clusters cluster_centers = fn.get_cluster_centers(cluster_pipe, num_cols,col_names) ## Separate focus vs non focus clusters selected based on our analysis focus_clusters = cluster_centers.iloc[[0, 5, 1, 4]].T focus_clusters.columns += 1 focus_clusters_red = focus_clusters[focus_clusters.std(axis = 1)>2].sort_index() focus_clusters_red ###Output _____no_output_____ ###Markdown Based on the previous table, we can infer the following traits for customers in both focus and non-focust customer segments:*Focus Clusters** Mid to High-income* Mainly families and single parents* Older than non-focus demographic* Living in single home, duplex or small apartment buildings* More consumption-oriented*Non-focus Clusters** Low-income* Mainly singles and single parents* Younger than focus demographic* Living in big apartment buildings with many household* Less consumption oriented Part 2: Supervised Learning Model Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. 
Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code # mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_train = pd.read_csv('data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') ## Clean the data mailout_train_clean = pd.concat(\ [fn.clean_data(mailout_train.drop(['RESPONSE'], axis = 1), drop_cols = drop_columns),\ mailout_train['RESPONSE']],\ axis = 1) # Print new shape and datatypes print("Old shape: {}".format(mailout_train.shape)) print("New shape: {} \n".format(mailout_train_clean.shape)) print("Datatypes:") print(mailout_train_clean.dtypes.value_counts()) ## Transform data, impute missing values and separate features from response variables X = mailout_train_clean.drop(['RESPONSE'], axis = 1) X = column_transformer.fit_transform(X) X = pd.DataFrame(X, columns = [col_names]) y = mailout_train_clean['RESPONSE'] ###Output _____no_output_____ ###Markdown Class imbalance The response variable of this dataset is highly imbalanced, as it can be observed in the following plot: ###Code ### class imbalance plot plt.figure(figsize = (6 , 7)) sns.countplot(data = mailout_train_clean, x = 'RESPONSE') plt.title('Respone variable count'); ###Output _____no_output_____ ###Markdown This fact cant greatly impact the quality of the prediction model as the prediction accuracy cannot be properly meassured. In order to overcome this, there is the possibility of over- or undersampling the current dataset to improve the response balance. Furthermore, the [SMOTE](https://www.analyticsvidhya.com/blog/2020/10/overcoming-class-imbalance-using-smote-techniques) approach further enhances under- and oversampling by incorporating generated synthetic samples. ###Code ## Apply SMOTE oversample = SMOTE(random_state= 42) X_bal, y_bal = oversample.fit_resample(X,y) plt.figure(figsize = (6 , 7)) plt.bar(x= [0,1], height = y_bal.value_counts()) plt.xticks([0,1]) plt.title('Response variable counts'); ###Output _____no_output_____ ###Markdown Model selection In this section we train different classification models to determine which approach is more suitable for our present task. Once selected, the best model can be further optimized by hyperparameter tuning. 
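Because the competition is scored with AUC, it can also be useful to compare candidates on that metric directly, alongside the learning curves below. A small sketch (using the balanced `X_bal`, `y_bal` from above); note that since SMOTE was applied before splitting, cross-validated scores on the resampled data will be optimistic, and resampling inside each fold, for instance with an `imblearn` pipeline, avoids that leakage: ###Code
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier

# Sketch: score two of the candidate models on the competition metric (ROC AUC)
for name, model in [('AdaBoostClassifier', AdaBoostClassifier()),
                    ('GradientBoostingClassifier', GradientBoostingClassifier())]:
    auc = cross_val_score(model, X_bal, y_bal, scoring='roc_auc', cv=3, n_jobs=-1)
    print("{}: mean ROC AUC = {:.3f} (+/- {:.3f})".format(name, auc.mean(), auc.std()))
###Output _____no_output_____ ###Markdown This is only a cross-check; the learning curves below remain the main basis for the model choice.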
###Code ## Define candidate models models = {'RandomForestClassifier': RandomForestClassifier(), 'AdaBoostClassifier': AdaBoostClassifier(), 'GradientBoostingClassifier': GradientBoostingClassifier(), 'BaggingClassifier': BaggingClassifier() } ## Run crossvalidation to plot learning curves and determine best model cv = ShuffleSplit(n_splits=2, test_size=0.2, random_state=0) for model_key in models.keys(): print(model_key) ml_pipeline = Pipeline([ ('model', models[model_key]) ]) fn.plot_learning_curve(ml_pipeline, title = "test", X = X_bal, y = y_bal, cv = cv, verbose = 0, n_jobs = -1) plt.show() ###Output RandomForestClassifier ###Markdown From these experiments we can observe the following:* The `RandomForestClassifier` presented simultaneously the best traning score and the worst validation score. This indicates that this model is overfitting (*i.e.* shows high model bias) and therefore will not be chosen* The `AdaBoostClassifier`, the `GradientBoostingClassifier`, and the `BaggingClassifier` show less model bias as the training and validation score converge relatively early (this occurs already after 30-50% of the data is passed to the algorithm). * Even though `GradientBoostingClassifier` and the `BaggingClassifier` both show similar values levels for the validation score, the `GradientBoostingClassifier` is slightly higher (even though computationally more costly) and therefore we chose to use it during our following steps. Hyperparameter Tuning Now that we selected `GradientBoostingClassifier` as our classification algorithm the next step is to tune is parameters. This can be achieved through a grid-search on which a combination of series of parameters can be tested to select the best configuration. ###Code ### Define search grid grid_pipe = Pipeline([ ('gbc', GradientBoostingClassifier(random_state = 42)) ]) grid_params = {'gbc__learning_rate': [0.1, 0.2] , 'gbc__n_estimators': [100] , 'gbc__max_depth': [3, 5] , 'gbc__min_samples_split': [2,4]} grid_opt = GridSearchCV(grid_pipe , grid_params , scoring = 'roc_auc' , verbose = 2) ## Fit model with grid parameters grid_opt.fit(X_bal, y_bal) # Get the estimator and predict print(grid_opt.best_params_) best_estimator = grid_opt.best_estimator_ predictions = best_estimator.predict_proba(X_bal)[:, 1] # Save to file in the current working directory # pkl_filename = "gd_model.pkl" # with open(pkl_filename, 'wb') as file: # pickle.dump(best_estimator, file) # Make predictions using unoptimized and the best model predictions = (grid_pipe.fit(X_bal, y_bal)).predict_proba(X_bal)[:, 1] print("ROC score: {:.4f}".format(roc_auc_score(y_bal, predictions))) print("Final ROC score: {:.4f}".format(roc_auc_score(y_bal, predictions))) ###Output {'gbc__learning_rate': 0.1, 'gbc__max_depth': 5, 'gbc__min_samples_split': 2, 'gbc__n_estimators': 100} ROC score: 0.9900 Final ROC score: 0.9900 ###Markdown In addition we can plot which variables have the most importance for the classification model: ###Code var_imp = pd.Series(best_estimator.named_steps['gbc']\ .feature_importances_, index = col_names)\ .sort_values() plt.barh(var_imp[-10:].index, var_imp[-10:]) plt.xlabel('Feature Importance') plt.ylabel('Column') plt.title('Top 10 most important Variables'); ###Output _____no_output_____ ###Markdown With this section completed, we can therefore proceed to make our predictions for the test set provided for the Kaggle competition, Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout 
campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code # mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') mailout_test = pd.read_csv('data/Udacity_MAILOUT_052018_TEST.csv', sep=';', low_memory = False) ## Clean data mailout_clean = fn.clean_data(df = mailout_test, drop_cols = drop_columns) # Print new shape and datatypes print("Old shape: {}".format(mailout_test.shape)) print("New shape: {} \n".format(mailout_clean.shape)) print("Datatypes:") print(mailout_clean.dtypes.value_counts()) ## Transform/impute data mailout_trans = column_transformer.fit_transform(mailout_clean) ## Predict RESPONSE kaggle_submission = best_estimator.predict_proba(mailout_trans)[:, 1] ## Save predictions in Kaggle's format for submission submission_file = pd.DataFrame({'LNR':mailout_test['LNR'],'RESPONSE':kaggle_submission}) submission_file.to_csv('kaggle_submission.csv', index=False) submission_file.head() ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import seaborn as sns # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. 
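The warning mentioned above is a pandas `DtypeWarning` about columns with mixed types (it reappears later when the "MAILOUT" files are read). One way to avoid it is to declare the affected columns as strings at load time; the sketch below assumes the culprits are the CAMEO columns, which the value listings further down show mixing numeric codes with 'X'/'XX': ###Code
import pandas as pd

mixed_cols = {'CAMEO_DEUG_2015': str, 'CAMEO_INTL_2015': str}  # assumed mixed-type columns
azdias = pd.read_csv('data/Udacity_AZDIAS_052018.csv', sep=',', dtype=mixed_cols)
customers = pd.read_csv('data/Udacity_CUSTOMERS_052018.csv', sep=',', dtype=mixed_cols)
###Output _____no_output_____ ###Markdown Alternatively, `low_memory=False` simply reads each file in one pass, as the warning itself suggests.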
Read raw data ###Code #these data is download from udacity, the format is a little bit different with the original data # load in the data azdias = pd.read_csv('data/Udacity_AZDIAS_052018.csv', sep=',') customers = pd.read_csv('data/Udacity_CUSTOMERS_052018.csv', sep=',') #drop the "Unamed: 0" column # this column is added when saving the data from the Udacity workspace azdias.drop('Unnamed: 0',axis=1,inplace=True) customers.drop('Unnamed: 0',axis=1,inplace=True) azdias.shape,customers.shape ###Output _____no_output_____ ###Markdown Merge two datasets into oneAdd one column to label the dataset; Merge two datasets into one can save energy when cleaning the data ###Code # add one column to the data azdias["label"]="Azdias" customers["label"]="Customers" #merge two dataset together merge_data=pd.concat([azdias, customers]) #release memory del azdias del customers #print current shape of merged dataset merge_data.shape ###Output _____no_output_____ ###Markdown Check the type of each column ###Code #check the "object" type columns temp=merge_data.dtypes for i in temp[temp=="object"].index: print(colored(i, 'red')," : \n" ,merge_data[i].unique()) ###Output CAMEO_DEUG_2015 : [nan 8.0 4.0 2.0 6.0 1.0 9.0 5.0 7.0 3.0 '4' '3' '7' '2' '8' '9' '6' '5' '1' 'X'] CAMEO_DEU_2015 : [nan '8A' '4C' '2A' '6B' '8C' '4A' '2D' '1A' '1E' '9D' '5C' '8B' '7A' '5D' '9E' '9B' '1B' '3D' '4E' '4B' '3C' '5A' '7B' '9A' '6D' '6E' '2C' '7C' '9C' '7D' '5E' '1D' '8D' '6C' '6A' '5B' '4D' '3A' '2B' '7E' '3B' '6F' '5F' '1C' 'XX'] CAMEO_INTL_2015 : [nan 51.0 24.0 12.0 43.0 54.0 22.0 14.0 13.0 15.0 33.0 41.0 34.0 55.0 25.0 23.0 31.0 52.0 35.0 45.0 44.0 32.0 '22' '24' '41' '12' '54' '51' '44' '35' '23' '25' '14' '34' '52' '55' '31' '32' '15' '13' '43' '33' '45' 'XX'] CUSTOMER_GROUP : [nan 'MULTI_BUYER' 'SINGLE_BUYER'] D19_LETZTER_KAUF_BRANCHE : [nan 'D19_UNBEKANNT' 'D19_SCHUHE' 'D19_ENERGIE' 'D19_KOSMETIK' 'D19_VOLLSORTIMENT' 'D19_SONSTIGE' 'D19_BANKEN_GROSS' 'D19_DROGERIEARTIKEL' 'D19_HANDWERK' 'D19_BUCH_CD' 'D19_VERSICHERUNGEN' 'D19_VERSAND_REST' 'D19_TELKO_REST' 'D19_BANKEN_DIREKT' 'D19_BANKEN_REST' 'D19_FREIZEIT' 'D19_LEBENSMITTEL' 'D19_HAUS_DEKO' 'D19_BEKLEIDUNG_REST' 'D19_SAMMELARTIKEL' 'D19_TELKO_MOBILE' 'D19_REISEN' 'D19_BEKLEIDUNG_GEH' 'D19_TECHNIK' 'D19_NAHRUNGSERGAENZUNG' 'D19_DIGIT_SERV' 'D19_LOTTO' 'D19_RATGEBER' 'D19_TIERARTIKEL' 'D19_KINDERARTIKEL' 'D19_BIO_OEKO' 'D19_WEIN_FEINKOST' 'D19_GARTEN' 'D19_BILDUNG' 'D19_BANKEN_LOKAL'] EINGEFUEGT_AM : [nan '1992-02-10 00:00:00' '1992-02-12 00:00:00' ..., '2011-04-15 00:00:00' '1997-09-15 00:00:00' '2007-08-13 00:00:00'] OST_WEST_KZ : [nan 'W' 'O'] PRODUCT_GROUP : [nan 'COSMETIC_AND_FOOD' 'FOOD' 'COSMETIC'] label : ['Azdias' 'Customers'] ###Markdown Data cleaning ###Code def str2float(x): if type(x)==str: return eval(x) else: return x def data_cleaning(df): ''' Cleaning the data: replace some values,change string into number and change some columns into categories and datatime. 
Parameters: INPUT: df(Dataframe): the dataset which will be cleaned OUTPUT: df(Datafrem): the cleaned dataset ''' for column in ['CAMEO_DEUG_2015', 'CAMEO_DEU_2015', 'CAMEO_INTL_2015']: try: df[column][(df[column]=="X")|(df[column]=="XX")]=np.nan except: pass for column in ['CAMEO_DEUG_2015', 'CAMEO_INTL_2015']: df[column]=df[column].apply(str2float) #change the catergory columns into number for column in ["CAMEO_DEU_2015","D19_LETZTER_KAUF_BRANCHE","OST_WEST_KZ",]: df[column] = pd.Categorical(df[column]) df[column] = df[column].cat.codes #extract the time,and keep the year df["EINGEFUEGT_AM"]=pd.to_datetime(df["EINGEFUEGT_AM"]).dt.year #change all the unknown back to nan #this step is not exactly correct, because some unknown data is labled as 0 and 9. for column in df.columns.values: try: df[column][df[column]==-1]=np.nan except: pass return df merge_data=data_cleaning(merge_data) #check the "object" type columns again temp=merge_data.dtypes for i in temp[temp=="object"].index: print(i) print(merge_data[i].unique()) ###Output PRODUCT_GROUP [nan 'COSMETIC_AND_FOOD' 'FOOD' 'COSMETIC'] label ['Azdias' 'Customers'] ###Markdown From the result above, we can see only two variables are not number. We will not use these two variable when doing analysis. Data visualizationCompare the distribution of features between AZDIAS and CUSTOMERS datasets. ###Code def visual(x= "AGER_TYP", y= "Prop",hue ="label",plot=False): #plot the distribution of variable x in the Azdias and customers dataset prop_df = (merge_data[x] .groupby(merge_data[hue]) .value_counts(normalize=True) .rename(y) .reset_index()) sns.barplot(x=x, y=y, hue=hue, data=prop_df, ) if plot: plt.show() else: plt.savefig(x+".png") plt.clf() ###Output _____no_output_____ ###Markdown Plot all the distribution of features and save them as png files ###Code #Get the list of every variable, and delete some special columns plot_list=list(merge_data.columns.values) plot_list.remove("LNR") plot_list.remove("label") #plot the distribution of every variable in the Azdias and customers for col in plot_list[250:]: #print(col) visual(x=col) ###Output _____no_output_____ ###Markdown Plot some distribution of features - different distribution between AZDIAS and CUSTOMERS datasets ###Code #Based on the visualization result, plot some variables have significant different distribution in two dataset variables_list=["ALTER_HH","ALTERSKATEGORIE_FEIN","CJT_TYP_5","D19_KONSUMTYP","D19_KONSUMTYP_MAX", "D19_SOZIALES","FINANZ_ANLEGER","KBA05_GBZ","KOMBIALTER","VK_ZG11"] for col in variables_list: #print(col) plt.figure(figsize=(12,7)) visual(x=col,plot=True) ###Output _____no_output_____ ###Markdown - similar distribution between AZDIAS and CUSTOMERS datasets ###Code variables_list=["VHN", "ALTER_KIND3"] for col in variables_list: #print(col) plt.figure(figsize=(12,7)) visual(x=col,plot=True) import pickle file_Name = "merged_data.pickle" # we open the file for reading fileObject = open(file_Name,'rb') # load the object from the file into var b merge_data = pickle.load(fileObject) ###Output _____no_output_____ ###Markdown Check missing values ###Code temp_desc=merge_data.describe() rate=temp_desc.loc["count",:]/len(merge_data) #print the columns which has more than 20% missing values print(" Name Missing") 1-rate[rate<0.7] ###Output Name Missing ###Markdown Drop the columns which have more than 20% missing values ###Code dorp_list=rate[rate<0.7].index.values merge_data.drop(dorp_list,axis=1,inplace=True) dorp_list ###Output _____no_output_____ ###Markdown Fill nan 
values as -1 ###Code #merge_data=merge_data.fillna(merge_data.mean()) merge_data=merge_data.fillna(-1) merge_data.head() ###Output _____no_output_____ ###Markdown Save the filled data into pickle file ###Code import pickle file_Name = "merged_data_filled2.pickle" # open the file for writing fileObject = open(file_Name,'wb') # this writes the object a to the # file named 'testfile' pickle.dump(merge_data,fileObject) # here we close the fileObject fileObject.close() ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. ###Code from sklearn import preprocessing from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.manifold import TSNE import matplotlib.pyplot as plt import numpy as np import seaborn as sns ###Output _____no_output_____ ###Markdown Read pickle file ###Code import pickle import pandas as pd from sklearn.cluster import KMeans file_Name = "merged_data_filled2.pickle" # we open the file for reading fileObject = open(file_Name,'rb') # load the object from the file into var b merge_data = pickle.load(fileObject) #Because the dataset is huge, here I only choose 40% of total dataset to process the unsupervised learning merge_data = merge_data.sample(frac=0.3).reset_index(drop=True) merge_data.shape ###Output _____no_output_____ ###Markdown Drop unnecessary columns ###Code #change the label into numerical merge_data["label"][merge_data["label"]=="Azdias"]=1 merge_data["label"][merge_data["label"]=="Customers"]=0 label=merge_data["label"] #drop the unnecessary columns merge_data.drop("label",axis=1,inplace=True) merge_data.drop("PRODUCT_GROUP",axis=1,inplace=True) merge_data.drop("LNR",axis=1,inplace=True) ###Output /home/fafun/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy /home/fafun/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy This is separate from the ipykernel package so we can avoid doing imports until ###Markdown One hot encodingFor some variables, like "very high, high, average, low, very low", we could use number to indicate the strength. However, for some category variables, we need use one hot coding to encoding them. 
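One detail to watch in the encoding cell below: `pd.get_dummies` without a `prefix` names the indicator columns after the raw category codes, so two variables that share a code produce identically named columns (the Part 2 section of this notebook passes `prefix=name` for exactly that reason). A toy illustration with made-up values: ###Code
import pandas as pd

toy = pd.DataFrame({'CJT_GESAMTTYP': [1, 2], 'ZABEOTYP': [1, 3]})  # made-up values
print(pd.get_dummies(toy['CJT_GESAMTTYP']).columns.tolist())               # [1, 2]
print(pd.get_dummies(toy['ZABEOTYP']).columns.tolist())                    # [1, 3] -- note the clash on 1
print(pd.get_dummies(toy['ZABEOTYP'], prefix='ZABEOTYP').columns.tolist()) # ['ZABEOTYP_1', 'ZABEOTYP_3']
###Output _____no_output_____ ###Markdown Adding the prefix keeps every indicator column traceable to the variable it came from.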
###Code dummy_list=["CJT_GESAMTTYP","D19_KONSUMTYP","D19_KK_KUNDENTYP","GEBAEUDETYP","GFK_URLAUBERTYP","LP_FAMILIE_FEIN", "LP_STATUS_FEIN","PRAEGENDE_JUGENDJAHRE","TITEL_KZ","ZABEOTYP"] cols=merge_data.columns.values for name in dummy_list: if name in cols: print(name) dummies=pd.get_dummies(merge_data[name]) merge_data = pd.concat([merge_data, dummies], axis=1) merge_data.drop(name,axis=1,inplace=True) ###Output CJT_GESAMTTYP D19_KONSUMTYP GEBAEUDETYP GFK_URLAUBERTYP LP_FAMILIE_FEIN LP_STATUS_FEIN PRAEGENDE_JUGENDJAHRE TITEL_KZ ZABEOTYP ###Markdown Normalization ###Code x = merge_data.values #returns a numpy array scaler = StandardScaler() scaler.fit(x) x_scaled=scaler.transform(x) ###Output _____no_output_____ ###Markdown PCA dimension reduction ###Code #Fitting the PCA algorithm with our Data pca = PCA().fit(x_scaled) #Plotting the Cumulative Summation of the Explained Variance plt.figure() plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('Number of Components') plt.ylabel('Variance (%)') #for each component plt.title('Dataset Explained Variance') plt.grid() plt.show() ###Output _____no_output_____ ###Markdown t-SNE visualization ###Code # When choosing a large number of principal components, fitting takes a very long time, so n_components is set to 10 here. pca = PCA(n_components=10) dataset = pca.fit_transform(x_scaled) # Defining Model model = TSNE(learning_rate=100) # Fitting Model transformed = model.fit_transform(dataset) # Plotting 2d t-SNE x_axis = transformed[:, 0] y_axis = transformed[:, 1] plt.figure(figsize=(10,10)) plt.scatter(x_axis, y_axis, c=label.values,s=1) plt.show() ###Output _____no_output_____ ###Markdown K-means clustering ###Code #PCA reduction pca = PCA(n_components=300) dataset = pca.fit_transform(x_scaled) # to determine the number of clusters costs=[] for k in range(3,40): kmeans = KMeans(n_clusters=k, random_state=0,n_jobs=-1).fit(dataset) costs.append(kmeans.inertia_) # change of the sum of squared distances with the number of clusters plt.plot(range(3,40),costs) plt.xlabel("number of clusters") plt.ylabel("Sum of distance") plt.show() ###Output _____no_output_____ ###Markdown Based on the elbow point above, I set 8 as the number of clusters 
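As a quick second opinion on k=8 before training the final model, the silhouette score on a subsample can be computed (a sketch; the subsample keeps the pairwise-distance computation manageable, and `dataset` is the PCA-reduced array from the cell above): ###Code
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

np.random.seed(0)
idx = np.random.choice(len(dataset), size=10000, replace=False)
sample = dataset[idx]
for k in (6, 7, 8, 9, 10):
    labels_k = KMeans(n_clusters=k, random_state=0).fit_predict(sample)
    print(k, round(silhouette_score(sample, labels_k), 3))
###Output _____no_output_____ ###Markdown Higher silhouette values indicate better-separated clusters; if k=8 is not clearly worse than its neighbours, the elbow-based choice is reasonable.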
Compare the different between cluster 3 and cluster 6 ###Code # extract the original data Customers_originaldata=merge_data[label.values==0] Azdias_originaldata=merge_data[label.values==1] # As we see in the data visulization part, the distribution of some variable is different variables_list=["ALTER_HH","CJT_TYP_5","D19_KONSUMTYP_MAX","D19_SOZIALES", "FINANZ_ANLEGER","KBA05_GBZ","KOMBIALTER","VK_ZG11"] #compare the difference in cluster 5 and cluster 7 for name in variables_list: print(name) Customers_cluster=Customers_originaldata[cus==5] Azdias_cluster=Azdias_originaldata[azd==3] plt.hist([Customers_cluster[name],Azdias_cluster[name]],normed=1,label=['Customers','Azdias']) plt.legend() plt.xlabel(name) plt.ylabel("percent") plt.show() ###Output ALTER_HH ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code from termcolor import colored from sklearn.model_selection import train_test_split import lightgbm as lgb ###Output _____no_output_____ ###Markdown Read data ###Code mailout_train = pd.read_csv('./data/Udacity_MAILOUT_052018_TRAIN.csv', sep=',') mailout_train.drop('Unnamed: 0',axis=1,inplace=True) print("There are {} individuals in the traing dataset".format(len(mailout_train))) print("There are {} individuals response the mail-out in the traing dataset".format(len(mailout_train[mailout_train.RESPONSE==1]))) mailout_test = pd.read_csv('./data/Udacity_MAILOUT_052018_TEST.csv', sep=',') mailout_test.drop('Unnamed: 0',axis=1,inplace=True) ###Output /home/fafun/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2698: DtypeWarning: Columns (19,20) have mixed types. Specify dtype option on import or set low_memory=False. 
interactivity=interactivity, compiler=compiler, result=result) ###Markdown Merge files ###Code merge_data=pd.concat([mailout_train, mailout_test]) len_train=len(mailout_train) ###Output _____no_output_____ ###Markdown Check types ###Code temp=merge_data.dtypes for i in temp[temp=="object"].index: print(colored(i, 'red')," : \n" ,merge_data[i].unique()) ###Output CAMEO_DEUG_2015 : [5.0 2.0 7.0 4.0 nan 3.0 6.0 1.0 8.0 9.0 '4' '6' '2' '9' '8' '7' '3' '1' '5' 'X'] CAMEO_DEU_2015 : ['5D' '5B' '2D' '7B' '4C' '5C' nan '3D' '5A' '2C' '4A' '6B' '1A' '8D' '4B' '7A' '4E' '3A' '7C' '9D' '8A' '5E' '8B' '3C' '6E' '4D' '2B' '3B' '7E' '2A' '6C' '1C' '6D' '7D' '1D' '8C' '9A' '9B' '9C' '9E' '6F' '1E' '6A' '5F' '1B' 'XX'] CAMEO_INTL_2015 : [34.0 32.0 14.0 41.0 24.0 33.0 nan 25.0 31.0 22.0 43.0 13.0 55.0 23.0 54.0 51.0 45.0 12.0 44.0 35.0 15.0 52.0 '23' '44' '14' '55' '51' '45' '43' '22' '54' '24' '25' '13' '12' '35' '33' '41' '15' '52' '31' '32' '34' 'XX'] D19_LETZTER_KAUF_BRANCHE : ['D19_UNBEKANNT' 'D19_TELKO_MOBILE' 'D19_LEBENSMITTEL' 'D19_BEKLEIDUNG_GEH' 'D19_BUCH_CD' nan 'D19_NAHRUNGSERGAENZUNG' 'D19_SCHUHE' 'D19_SONSTIGE' 'D19_HAUS_DEKO' 'D19_FREIZEIT' 'D19_ENERGIE' 'D19_VOLLSORTIMENT' 'D19_BANKEN_REST' 'D19_VERSICHERUNGEN' 'D19_KINDERARTIKEL' 'D19_TECHNIK' 'D19_DROGERIEARTIKEL' 'D19_BEKLEIDUNG_REST' 'D19_WEIN_FEINKOST' 'D19_HANDWERK' 'D19_GARTEN' 'D19_BANKEN_DIREKT' 'D19_DIGIT_SERV' 'D19_REISEN' 'D19_SAMMELARTIKEL' 'D19_BANKEN_GROSS' 'D19_VERSAND_REST' 'D19_TELKO_REST' 'D19_BILDUNG' 'D19_BANKEN_LOKAL' 'D19_TIERARTIKEL' 'D19_BIO_OEKO' 'D19_RATGEBER' 'D19_LOTTO' 'D19_KOSMETIK'] EINGEFUEGT_AM : ['1992-02-10 00:00:00' '1997-05-14 00:00:00' '1995-05-24 00:00:00' ..., '2001-07-09 00:00:00' '2003-08-06 00:00:00' '2002-10-14 00:00:00'] OST_WEST_KZ : ['W' 'O' nan] ###Markdown Data cleaning -- using a function defined in part 1 ###Code merge_data=data_cleaning(merge_data) ###Output /home/fafun/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:15: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy from ipykernel import kernelapp as app /home/fafun/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:32: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy ###Markdown Check missing values, and delete the columns which have more than 25% missing values ###Code temp_desc=merge_data.describe() rate=temp_desc.loc["count",:]/len(merge_data) #print the columns which has more than 20% missing values rate[rate<0.75].index drop_list=['AGER_TYP', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'EXTSEL992', 'KK_KUNDENTYP',] merge_data.drop(drop_list,axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Fill nan values with -1 ###Code #merge_data=merge_data.fillna(merge_data.mean()) merge_data=merge_data.fillna(-1) ###Output _____no_output_____ ###Markdown one hot encoding ###Code dummy_list=["CJT_GESAMTTYP","D19_KONSUMTYP","D19_KK_KUNDENTYP","GEBAEUDETYP","GFK_URLAUBERTYP","LP_FAMILIE_FEIN", "LP_STATUS_FEIN","PRAEGENDE_JUGENDJAHRE","TITEL_KZ","ZABEOTYP","CAMEO_DEU_2015","CAMEO_INTL_2015"] cols=merge_data.columns.values for name in dummy_list: if name in cols: print(name) dummies=pd.get_dummies(merge_data[name],prefix=name) merge_data = pd.concat([merge_data, dummies], 
axis=1) merge_data.drop(name,axis=1,inplace=True) ###Output CJT_GESAMTTYP D19_KONSUMTYP GEBAEUDETYP GFK_URLAUBERTYP LP_FAMILIE_FEIN LP_STATUS_FEIN PRAEGENDE_JUGENDJAHRE TITEL_KZ ZABEOTYP CAMEO_DEU_2015 CAMEO_INTL_2015 ###Markdown Drop unnecessary columns ###Code Y_train=merge_data.RESPONSE.values[:len_train] merge_data.drop("RESPONSE",axis=1,inplace=True) merge_data.drop("LNR",axis=1,inplace=True) ###Output _____no_output_____ ###Markdown Split the training and testing dataset ###Code X=merge_data.iloc[:len_train,:] Test=merge_data.iloc[len_train:,:] X_train, X_test, y_train, y_test = train_test_split(X, Y_train, test_size=0.1, random_state=42) X_train.shape ###Output _____no_output_____ ###Markdown Modeling ###Code model = lgb.LGBMRegressor(boosting_type='gbdt', num_leaves=31, max_depth=-1, learning_rate=0.01, n_estimators=200, max_bin=255, subsample_for_bin=50000, objective=None, min_split_gain=0, min_child_weight=3, min_child_samples=10, subsample=1, subsample_freq=1, colsample_bytree=1, reg_alpha=0.1, reg_lambda=0, seed=17, silent=False, nthread=-1) model.fit(X_train, y_train, eval_metric='rmse', eval_set=[(X_test,y_test )], verbose = False) ###Output _____no_output_____ ###Markdown Feature importance ###Code df_importanct=pd.DataFrame(model.feature_importances_.T,index=merge_data.columns.values,columns=["importance",]) df_importanct.sort_values(by="importance",ascending=False,inplace=True) plt.figure(figsize=(6,10)) sns.barplot(x=df_importanct.importance[:20],y=df_importanct.index[:20]) plt.show() ###Output _____no_output_____ ###Markdown Grid search for better parameters ###Code from sklearn.model_selection import GridSearchCV from sklearn.metrics import roc_auc_score parameters = { "num_leaves":[31,20,], "learning_rate":[0.01,0.05,], "n_estimators":[100,200,], 'max_bin':[128,255], 'subsample_for_bin':[5000,100000], 'min_split_gain':[0,0.1,0.01], 'min_child_weight':[1,3,], 'min_child_samples':[10,20,], 'subsample':[1,2,], 'subsample_freq':[1,2,], "bagging_fraction":[1,], 'reg_alpha':[0.1,] , 'nthread':[-1,] } model_lgb = lgb.LGBMRegressor() model = GridSearchCV(model_lgb, parameters) model.fit(X_train, y_train) #make prediction y_pre=model.predict(X_test) ### calculate ROC AUC score roc_auc_score(y_test, y_pre) ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. 
The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code #mailout_test = pd.read_csv('./data/Udacity_MAILOUT_052018_TEST.csv', sep=',') #mailout_test.drop('Unnamed: 0',axis=1,inplace=True) test_pre=model.predict(Test) test_pre result=pd.DataFrame() result["LNR"]=mailout_test.LNR result["RESPONSE"]=test_pre result.head() ###Output _____no_output_____ ###Markdown Save data to submission file ###Code result.to_csv("submission.csv",index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code !pip install xgboost # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.preprocessing import StandardScaler, MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import GridSearchCV from sklearn.decomposition import PCA from sklearn.preprocessing import Imputer from sklearn.cluster import KMeans from sklearn.cluster import DBSCAN from sklearn.cluster import MeanShift from sklearn.ensemble import AdaBoostRegressor # Adaptive Boosting from sklearn.ensemble import GradientBoostingRegressor # Gradient Tree Boosting from xgboost.sklearn import XGBRegressor # Extreme Gradient Boosting import xgboost as xgb from sklearn.metrics import roc_auc_score, fbeta_score, accuracy_score, precision_score, recall_score from scipy import stats from time import time # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. 
Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely. You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') # Set pandas display to be able to scroll through all columns and rows pd.set_option('display.max_columns', 200) pd.set_option('display.max_rows',200) # Be sure to add in a lot more cells (both markdown and code) to document your # approach and findings! azdias.head() list(azdias.columns) customers.head() list(customers.columns) differences = [] for i in customers.columns: if i not in azdias: differences.append(i) print(differences) # Structure of dataframe; followed by investigation cells azdias.shape print('Number of rows:', azdias.shape[0]) print('Number of columns', azdias.shape[1]) azdias.info() azdias['AGER_TYP'].unique() azdias.describe().transpose() ###Output _____no_output_____ ###Markdown Complications encountered while cleaning the data: 1. The amount of data: working with this much data in a Jupyter notebook is awkward, because you can run out of memory and you cannot see all of the columns or rows at once. 2. Choosing how to handle the missing data: the biggest question is which strategy works best; useful tips are to compute the share of missing values per column, compute the share of missing values per row, and, when only a small amount is missing, fill the gaps with the most frequent value of each column. 3. Patience and backups: some cells take a long time to run, so I recommend backing up intermediate results regularly with the pickle library.
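To make those tips concrete, here is a minimal, hypothetical sketch on a small toy DataFrame (the column names are invented and are not part of the Arvato data): it measures the share of missing values per column and per row, fills lightly affected columns with their most frequent value, and pickles an intermediate backup. ###Code
# Toy illustration only: invented column names, not the Arvato data.
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'feature_a': [1.0, np.nan, 3.0, np.nan, 5.0],
    'feature_b': [2.0, 2.0, np.nan, 2.0, 2.0],
    'feature_c': [np.nan] * 5,
})

# Share of missing values per column and per row, to guide what to drop.
missing_per_column = toy.isnull().mean() * 100
missing_per_row = toy.isnull().mean(axis=1) * 100
print(missing_per_column)
print(missing_per_row)

# For columns with only a little missing data, fill with the most frequent value (the mode).
mostly_complete = missing_per_column[missing_per_column < 50].index
toy[mostly_complete] = toy[mostly_complete].fillna(toy[mostly_complete].mode().iloc[0])

# Back up intermediate results so long-running cells do not have to be re-run.
toy.to_pickle('toy_backup.pkl')
toy = pd.read_pickle('toy_backup.pkl')
print(toy)
###Output
_____no_output_____
###Markdown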
Handling Missing Data ###Code # find data that is 'naturally missing' in dataset # get number of nan/null data in azdias before any processing is applied # total number of fields with null - 'naturally' azdias_null = azdias.isnull().sum() azdias_null_percent = azdias_null / len(azdias) * 100 # visualise naturally missing data (azdias_null.sort_values(ascending=False)[:50].plot(kind='bar', figsize=(20,8), fontsize=13)) # get distribution of empty data in fields by percentage plt.figure(figsize=(10,5)) plt.hist(azdias_null_percent, bins = np.linspace(10,100,19), facecolor='g', alpha=0.75) plt.xlabel('% of missing value') plt.ylabel('# of Columns') plt.title('Distribution of missing data in each column') plt.grid(True) plt.show() # % of missing data in columns print('% of missing data in columns','\n',azdias_null_percent.sort_values(ascending=False)) # above we have already identified % of nan/null values columnwise # let's visualise the distribution trend column_nans = azdias.isnull().mean() plt.hist(column_nans, bins = np.arange(0,1+.05,.05)) plt.ylabel('# of features') plt.xlabel('prop. of missing values') # From review of data in the last 2 cells, we can find that the % of null data in columns ranges from 0.* to 17% # but after that there is a drastic difference in the % of null fields, clearly highlighting outliers. # Let's print the missing % distribution manually to understand if the outliers are evidently visible or further analysis/deep dive is required print('columns having missing values >0% : ',len(azdias_null_percent[azdias_null_percent>0].index)) print('columns having missing values >10%: ',len(azdias_null_percent[azdias_null_percent>10].index)) print('columns having missing values >20%: ',len(azdias_null_percent[azdias_null_percent>20].index)) print('columns having missing values >30%: ',len(azdias_null_percent[azdias_null_percent>30].index)) print('columns having missing values >40%: ',len(azdias_null_percent[azdias_null_percent>40].index)) print('columns having missing values >60%: ',len(azdias_null_percent[azdias_null_percent>60].index)) print('columns having missing values >65%: ',len(azdias_null_percent[azdias_null_percent>65].index)) print('columns having missing values >80%: ',len(azdias_null_percent[azdias_null_percent>80].index)) print('columns having missing values >90%: ',len(azdias_null_percent[azdias_null_percent>90].index)) # distribution analysis on row level row_nans = azdias.isnull().sum(axis=1) plt.hist(row_nans, bins = np.arange(-0.5,row_nans.max()+1,1)) plt.yticks(np.arange(0,300000+100000,100000),['0','100k','200k','300k']) plt.ylabel('# of data points') plt.xlabel('# of missing values') # as 0 has most of the data, we are unable to see the rest of the trend in detail. So plotting the distribution without 0 missing values # from the below chart we can see that the trend starts changing at around 9 missing values. row_nans = azdias[azdias.isnull().sum(axis=1) > 0].isnull().sum(axis=1) plt.hist(row_nans, bins = np.arange(-0.5,row_nans.max()+1,1)) plt.ylabel('# of data points') plt.xlabel('# of missing values') # let's see this in a slightly different way. # Calculate the percentage of data kept for rows with * or fewer missing data points # from below we can see the % of data kept for rows with various numbers of missing data points (in asc order) # from this we can see that at around 16 missing values the kept percentage more or less stagnates, and allowing more missing values per row does not directly increase the % of data kept. # so we will use this as an indicator to help us choose the threshold limit.
print("Percentage of data kept:",round(azdias.isnull().sum(axis=1).value_counts().sort_index().cumsum()[:30]/azdias.isnull().sum(axis=1).shape[0]*100,0)) azdias = azdias[azdias.isnull().sum(axis=1) <= 16].reset_index(drop=True) print('number of rows in new dataset: ',azdias.shape[0]) # from above we can see that there is significant difference on % of data missing. >65 is only 6 columns and this is a significant missing data # so let's drop these 6 columns drop_cols = azdias.columns[column_nans > 0.65] print('columns to drop: ', drop_cols) print('number of rows in new dataset: ',azdias.shape[0]) # Before dropping data on azdias lets preprocess customers dataset and get it ready for further processing # Drop the extra column of customers dataset. customers.drop(columns=['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=True) print('# of column in azdias before dropping: ', len(azdias.columns)) azdias = azdias.drop(drop_cols,axis=1) print('# of column in azdias after dropping: ', len(azdias.columns)) print('# of column in customers before dropping: ', len(customers.columns)) customers = customers.drop(drop_cols,axis=1) print('# of column in customers after dropping: ', len(customers.columns)) print('number of rows in new dataset: ',azdias.shape) print('number of rows in new dataset: ',customers.shape) # object field EINGEFUEGT_AM has too many different items. Dropping from dataset azdias = azdias.drop(['EINGEFUEGT_AM'],axis=1) customers = customers.drop(['EINGEFUEGT_AM'],axis=1) %%time # introducing this new clean up step - as without this we end up with 406 columns after one-hot encoding # reduce number of columns further by trying to removing highly correlated columns # idea and approach from Chris Albon's website https://chrisalbon.com/machine_learning/feature_selection/drop_highly_correlated_features/ # find correlation matrix corr_matrix = azdias.corr().abs() upper_limit = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) # identify columns to drop based on threshold limit drop_columns = [column for column in upper_limit.columns if any(upper_limit[column] > .7)] # drop columns from azdias azdias = azdias.drop(drop_columns, axis=1) print('number of columns', len(azdias.columns)) # repeat for customers # find correlation matrix corr_matrix = customers.corr().abs() upper_limit = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) # identify columns to drop based on threshold limit drop_columns = [column for column in upper_limit.columns if any(upper_limit[column] > .7)] # drop columns from azdias customers = customers.drop(drop_columns, axis=1) print('number of columns', len(customers.columns)) print('number of rows in new dataset: ',azdias.shape) print('number of rows in new dataset: ',customers.shape) # we have removed columns that has mostly missing values and do not add value. Let's explore columns with object data type. azdias.select_dtypes(include=['object']) print('number of columns', len(azdias.columns)) customers.select_dtypes(include=['object']).head() # before going ahead with encoding we need to find categorical fields - below 1 hack option to do it cols = azdias.columns num_cols = azdias._get_numeric_data().columns print('num_cols: ',num_cols) print('categorical: ',list(set(cols) - set(num_cols))) # we need to fill missing values here. We will fill missing values with -1 indicating unknown as in the description. 
azdias[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = azdias[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].replace(['X','XX'],-1) customers[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = customers[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].replace(['X','XX'],-1) azdias[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = azdias[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].fillna(-1) customers[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = customers[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].fillna(-1) azdias[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = azdias[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].astype(int) customers[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = customers[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].astype(int) azdias[['CAMEO_DEU_2015','OST_WEST_KZ']]=azdias[['CAMEO_DEU_2015','OST_WEST_KZ']].fillna(-1) customers[['CAMEO_DEU_2015','OST_WEST_KZ']]=customers[['CAMEO_DEU_2015','OST_WEST_KZ']].fillna(-1) customers.isnull().sum() azdias.isnull().sum() # fillna with 9 for fields that has 9 marked as unknown azdias[azdias.columns[(azdias==9).any()]] = azdias[azdias.columns[(azdias==9).any()]].fillna(9) customers[customers.columns[(customers==9).any()]] = customers[customers.columns[(customers==9).any()]].fillna(9) azdias[azdias.columns[(azdias==0).any()]] = azdias[azdias.columns[(azdias==0).any()]].fillna(0) customers[customers.columns[(customers==0).any()]] = customers[customers.columns[(customers==0).any()]].fillna(0) # fillna with -1 for fields that has 0 marked as unknown azdias[azdias.columns[(azdias==-1).any()]] = azdias[azdias.columns[(azdias==-1).any()]].fillna(-1) customers[customers.columns[(customers==-1).any()]] = customers[customers.columns[(customers==-1).any()]].fillna(-1) #with all null data now handled, we should focus on getting objects/categorical variables to numbers via one hot encoding azdias = pd.get_dummies(azdias) customers = pd.get_dummies(customers) print('number of rows in new dataset: ',azdias.shape) print('number of rows in new dataset: ',customers.shape) print(azdias.columns) print(customers.columns) azdias_columns = azdias.columns customers_columns = customers.columns # impute nans using mode value imputer = Imputer(missing_values='NaN',strategy='most_frequent',axis=0) azdias = imputer.fit_transform(azdias) azdias = pd.DataFrame(azdias) print('imputed azdias: ', azdias.head(5)) customers = imputer.fit_transform(customers) customers = pd.DataFrame(customers) print('imputed customers: ', customers.head(5)) print('number of rows in new dataset: ',azdias.shape) print('number of rows in new dataset: ',customers.shape) # convert to int azdias = azdias.astype(int) customers = customers.astype(int) %%time # detect and exclude outliers in dataframe # as mentioned in https://stackoverflow.com/questions/23199796/detect-and-exclude-outliers-in-pandas-data-frame # remove all rows that have outliers in at least one column azdias = azdias[(np.abs(stats.zscore(azdias)) < 6).all(axis=1)] customers = customers[(np.abs(stats.zscore(customers)) < 6).all(axis=1)] print('number of rows in new dataset: ',azdias.shape) print('number of rows in new dataset: ',customers.shape) azdias.to_pickle('azdias_before_scaling') customers.to_pickle('customers_before_scaling') %%time # load in the data azdias = pd.read_pickle('azdias_before_scaling') customers = pd.read_pickle('customers_before_scaling') ###Output CPU times: user 265 ms, sys: 1.41 s, total: 1.67 s Wall time: 2.79 s ###Markdown Implementation So far we have done various analysis and testing of data processing procedures. 
Now let us finalize the custom processing steps required to clean datasets related to this project and get the data ready for training and/or prediction ###Code def data_preprocess_2(df, for_clustering, df_name=None): if for_clustering: if df_name == 'azdias': df = df[df.isnull().sum(axis=1) <= 16].reset_index(drop=True) elif df_name == 'customers': df.drop(columns=['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=True) #column_nans = df.isnull().mean() drop_cols = ['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'EXTSEL992','KK_KUNDENTYP'] df = df.drop(drop_cols,axis=1) df = df.drop(['EINGEFUEGT_AM'],axis=1) df = df.drop(['D19_LETZTER_KAUF_BRANCHE'],axis=1) # find correlation matrix corr_matrix = df.corr().abs() upper_limit = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) # identify columns to drop based on threshold limit drop_columns = [column for column in upper_limit.columns if any(upper_limit[column] > .7)] # drop columns from df df = df.drop(drop_columns, axis=1) print('shape after corr', df.shape) # we need to fill missing values here. We will fill missing values with -1 indicating unknown as in the description. df[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = df[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].replace(['X','XX'],-1) df[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = df[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].fillna(-1) df[['CAMEO_DEUG_2015','CAMEO_INTL_2015']] = df[['CAMEO_DEUG_2015','CAMEO_INTL_2015']].astype(int) df[['CAMEO_DEU_2015','OST_WEST_KZ']]=df[['CAMEO_DEU_2015','OST_WEST_KZ']].fillna(-1) # fillna with 9 for fields that has 9 marked as unknown df[df.columns[(df==9).any()]] = df[df.columns[(df==9).any()]].fillna(9) # fillna with 0 for fields that has 0 marked as unknown df[df.columns[(df==0).any()]] = df[df.columns[(df==0).any()]].fillna(0) # fillna with -1 for fields that has 0 marked as unknown df[df.columns[(df==-1).any()]] = df[df.columns[(df==-1).any()]].fillna(-1) #print('col name before: ', df.columns) #with all null data now handled, we should focus on getting objects/categorical variables to numbers via one hot encoding df = pd.get_dummies(df) #print('col name after: ', df.columns) print('shape after one-hot', df.shape) df_columns = list(df.columns.values) # impute nans using mode value imputer = Imputer(missing_values='NaN',strategy='most_frequent',axis=0) df = imputer.fit_transform(df) df = pd.DataFrame(df) #print('imputed dataframe: ', df.head(5)) print('shape after impute', df.shape) # convert to int df = df.astype(int) # detect and exclude outliers in dataframe # as mentioned in https://stackoverflow.com/questions/23199796/detect-and-exclude-outliers-in-pandas-data-frame # remove all rows that have outliers in at least one column if for_clustering: print('inside outliers if') df = df[(np.abs(stats.zscore(df)) < 6).all(axis=1)] print('shape before scaling', df.shape) # scale the data scale = StandardScaler(copy=False) scaled = scale.fit_transform(df) df = pd.DataFrame(scaled,columns= df_columns) print('shape after scaling', df.shape) #else: # df.columns = df_columns df = df.set_index('LNR') return df ###Output _____no_output_____ ###Markdown Clean azdias - general population dataset ###Code azdias = data_preprocess_2(azdias, True, 'azdias') print(azdias.shape) print(azdias.head(5)) ###Output shape after corr (733227, 238) shape after one-hot (733227, 284) shape after impute (733227, 284) inside outliers if shape before scaling (415405, 284) shape after scaling (415405, 284) (415405, 283) AGER_TYP AKT_DAT_KL 
ALTER_HH ALTERSKATEGORIE_FEIN \ LNR 1.044527 -0.549413 1.155132 0.831893 0.911269 1.044589 -0.549413 -1.017213 1.223909 0.285868 1.044600 2.747309 -1.017213 -0.082810 -0.547999 1.044616 -0.549413 1.155132 -1.389529 -0.756466 1.044666 -0.549413 -1.017213 0.439878 0.285868 ANZ_HAUSHALTE_AKTIV ANZ_HH_TITEL ANZ_KINDER ANZ_PERSONEN \ LNR 1.044527 0.170790 -0.142864 -0.281792 -0.610158 1.044589 -0.488492 -0.142864 -0.281792 2.240974 1.044600 -0.300126 -0.142864 -0.281792 -0.610158 1.044616 -0.394309 -0.142864 -0.281792 -0.610158 1.044666 -0.205943 -0.142864 -0.281792 -0.610158 ANZ_TITEL ARBEIT ... CAMEO_DEU_2015_8C \ LNR ... 1.044527 0.0 -0.255250 ... -0.263318 1.044589 0.0 0.716291 ... -0.263318 1.044600 0.0 -1.226790 ... 3.797696 1.044616 0.0 0.716291 ... -0.263318 1.044666 0.0 -1.226790 ... -0.263318 CAMEO_DEU_2015_8D CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B \ LNR 1.044527 0.0 0.0 -0.243513 1.044589 0.0 0.0 -0.243513 1.044600 0.0 0.0 -0.243513 1.044616 0.0 0.0 -0.243513 1.044666 0.0 0.0 -0.243513 CAMEO_DEU_2015_9C CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E \ LNR 1.044527 -0.228966 -0.248766 0.0 1.044589 -0.228966 -0.248766 0.0 1.044600 -0.228966 -0.248766 0.0 1.044616 -0.228966 -0.248766 0.0 1.044666 -0.228966 -0.248766 0.0 CAMEO_DEU_2015_XX OST_WEST_KZ_O OST_WEST_KZ_W LNR 1.044527 0.0 -0.509642 0.509642 1.044589 0.0 -0.509642 0.509642 1.044600 0.0 -0.509642 0.509642 1.044616 0.0 -0.509642 0.509642 1.044666 0.0 -0.509642 0.509642 [5 rows x 283 columns] ###Markdown Clean customers dataset ###Code %%time customers = data_preprocess_2(customers, True, 'customers') print(customers.shape) print(customers.head(5)) ###Output shape after corr (191652, 256) shape after one-hot (191652, 303) shape after impute (191652, 303) inside outliers if shape before scaling (100341, 303) shape after scaling (100341, 303) (100341, 302) AGER_TYP AKT_DAT_KL ALTER_HH ALTERSKATEGORIE_FEIN \ LNR -1.556361 -0.738505 1.010105 0.168897 -0.209002 0.872952 0.777112 -1.037635 -0.468300 -0.545546 0.873622 0.777112 -1.037635 -0.043502 -0.209002 0.118344 0.777112 -1.037635 2.080488 1.137176 0.118561 0.777112 -1.037635 -0.043502 0.127543 ANZ_HAUSHALTE_AKTIV ANZ_HH_TITEL ANZ_KINDER ANZ_PERSONEN \ LNR -1.556361 0.808794 0.992832 -0.193253 -0.686881 0.872952 -1.330334 0.992832 -0.193253 -1.545501 0.873622 -1.092653 -1.008534 -0.193253 -0.973088 0.118344 -1.092653 -1.008534 -0.193253 -0.686881 0.118561 -1.092653 -1.008534 -0.193253 -1.259294 ANZ_TITEL ARBEIT ... CAMEO_DEU_2015_8D \ LNR ... -1.556361 0.0 0.985621 ... 0.0 0.872952 0.0 -1.487406 ... 0.0 0.873622 0.0 -0.869149 ... 0.0 0.118344 0.0 -0.869149 ... 0.0 0.118561 0.0 -1.178277 ... 
0.0 CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B CAMEO_DEU_2015_9C \ LNR -1.556361 0.0 0.0 0.0 0.872952 0.0 0.0 0.0 0.873622 0.0 0.0 0.0 0.118344 0.0 0.0 0.0 0.118561 0.0 0.0 0.0 CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E CAMEO_DEU_2015_XX \ LNR -1.556361 0.0 0.0 0.0 0.872952 0.0 0.0 0.0 0.873622 0.0 0.0 0.0 0.118344 0.0 0.0 0.0 0.118561 0.0 0.0 0.0 OST_WEST_KZ_-1 OST_WEST_KZ_O OST_WEST_KZ_W LNR -1.556361 1.013798 -0.164221 -0.961907 0.872952 -0.986389 -0.164221 1.039601 0.873622 -0.986389 -0.164221 1.039601 0.118344 -0.986389 -0.164221 1.039601 0.118561 -0.986389 -0.164221 1.039601 [5 rows x 302 columns] CPU times: user 1min 33s, sys: 4.09 s, total: 1min 37s Wall time: 3min 13s ###Markdown backup ###Code azdias.to_pickle('azdias.picke') customers.to_pickle('customers.picke') # load in the data azdias = pd.read_pickle('azdias.picke') customers = pd.read_pickle('customers.picke') ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. After data preprocessing step we could find that general population data (azdias) now has 415405 rows and 283 columns. Even though we have dropped not-so important features and outlier data, this is still high dimensional data and this is where we will be using Principal Component to reduce dimension. ###Code %%time pca = PCA().fit(azdias) plt.figure(figsize=(20,10)) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance') plt.show() def print_weights(n): ''' n: number of principal component ''' components = pd.DataFrame(np.round(pca.components_[n - 1: n], 4), columns = azdias.keys()) components.index = ['Weights'] components = components.sort_values(by = 'Weights', axis = 1, ascending=False) components = components.T print(components) return components ###Output _____no_output_____ ###Markdown With PCA we want to make our data has high variance. This way we do not lose critical information from dataset while reducing dimensions. Based on above chart we can see that at around 220 components, cumulative variance is still high. Let us reduce our data with 220 components ###Code def reduce_data(df,n=220): pca = PCA(n_components=n).fit(df) reduced_data = pca.transform(df) reduced_data = pd.DataFrame(reduced_data) print(pca.explained_variance_ratio_.sum()) return reduced_data reduced_azdias = reduce_data(azdias) reduced_customers = reduce_data(customers) print('number of rows in new dataset: ',reduced_azdias.shape) print('number of rows in new dataset: ',reduced_customers.shape) ###Output number of rows in new dataset: (415405, 220) number of rows in new dataset: (100341, 220) ###Markdown Clustering With dimension now reduced, let's do clustering. 
To decide on number of clusters, we will try using elbow method ###Code def score(data, k): kmeans_k = KMeans(k) model_k = kmeans_k.fit(data) return abs(model_k.score(data)) centers = np.linspace(1,21,21) centers scores = [] for i in range(1, 21): scores.append(score(reduced_azdias.sample(20000), i)) centers = np.linspace(1,20,20) plt.plot(centers, scores, linestyle='-', marker='o', color='orange') centers = np.linspace(1,20,20) plt.figure(figsize=(14,6)) plt.plot(centers, scores, linestyle='-', marker='o', color='orange') plt.xticks(list(range(1,22,2))) plt.ylabel('Average Within-Cluster Distances') plt.xlabel('Number of Clusters') ###Output _____no_output_____ ###Markdown From above chart we can see that at around 12 clusters, average distance within cluster almost flattens. We will use 12 as number of clusters ###Code %%time kmeans_k = KMeans(12) model_k = kmeans_k.fit(reduced_azdias) prediction_azdias = model_k.predict(reduced_azdias) azdias_clustered = pd.DataFrame(prediction_azdias, columns = ['Cluster']) prediction_customers = model_k.predict(reduced_customers) customers_clustered = pd.DataFrame(prediction_customers, columns = ['Cluster']) ###Output _____no_output_____ ###Markdown Analysis of data in clusters and also comparison between clusters of general population and customer data ###Code # Count number of predictions for each customer segment# Count n customer_clusters = pd.Series(prediction_customers) cc = customer_clusters.value_counts().sort_index() # Count number in each population segment population_clusters = pd.Series(prediction_azdias) pc = population_clusters.value_counts().sort_index() # Create a dataframe from population and customer segments df_stat = pd.concat([pc, cc], axis=1).reset_index() df_stat.columns = ['cluster','population','customer'] df_stat['difference'] = (df_stat['customer']/df_stat['customer'].sum()*100) - (df_stat['population']/df_stat['population'].sum()*100) df_stat # Compare the proportion of data in each cluster for the customer data to the # proportion of data in each cluster for the general population. 
# Add ratio and ratio difference for each cluster to the dataframe df_stat['pop_percent'] = (df_stat['population']/df_stat['population'].sum()*100).round(2) df_stat['cust_percent'] = (df_stat['customer']/df_stat['customer'].sum()*100).round(2) fig = plt.figure(figsize=(12,5)) ax = fig.add_subplot(111) ax = df_stat['pop_percent'].plot(x=df_stat['cluster'],width=-0.3,align='edge',color='blue',kind='bar',position=0) ax = df_stat['cust_percent'].plot(kind='bar',color='orange',width = 0.3, align='edge',position=1) ax.set_xlabel('Clusters', fontsize=15) ax.set_ylabel('Ratio %', fontsize=15) ax.xaxis.set(ticklabels=range(20)) ax.tick_params(axis = 'x', which = 'major', labelsize = 13) ax.margins(x=0.5,y=0.1) plt.legend(('Gen Population', 'Customer'),fontsize=15) plt.title(('Ratio of Gen Population Vs Customer segments as % of total per cluster')) plt.show() # Show Highest Positive and Negative weights when a PComponent and Weight is passed def pca_weights(pc,weight_num): ratio = pd.DataFrame(pca.explained_variance_ratio_,columns = ['EXPLAINED_VARIANCE']) ratio = ratio.round(3) weights = pd.DataFrame(pca.components_, columns = azdias.columns.values) weights = weights.round(3) result = pd.concat([ratio, weights], axis = 1, join_axes=[ratio.index]) result[:5] print("Principal Component: ", (pc)) print('\n') print("Highest Positive weights:") print(result.iloc[(pc)-1].sort_values(ascending=False)[:weight_num]) print('\n') print("Negative weights:") print(result.iloc[(pc)-1].sort_values()[:weight_num]) # Show highest positive and negative weights for 10 cluster (over representation of Customer) pca_weights(10,10) # Show lowest positive and negative weights for 3 cluster (under representation of Customer) pca_weights(3,10) # What kinds of people are part of a cluster that is overrepresented in the # customer data compared to the general population? # Analysis of principal components of cluster 2 with over-representation in customer segment. CC = model_k.cluster_centers_[2] CC = pd.Series(CC) CC.index = CC.index +1 print(CC.sort_values(ascending=False).head(5)) # Transform cluster 2 to original feature values. CC_inv = scale.inverse_transform(pca.inverse_transform(CC)) CC_inv = pd.Series(CC_inv).round(2) CC_inv.index = azdias_subset_columns CC_inv ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
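Before touching the real files, here is a small, self-contained sketch of the evaluation idea used below, on synthetic stand-in data (the features and target here are generated, not the actual MAILOUT columns): because responders are rare, the model's continuous predictions are scored with ROC AUC across stratified folds rather than thresholded and scored with accuracy. ###Code
# Hedged sketch on synthetic data; X_demo/y_demo are stand-ins, not the MAILOUT features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(42)
X_demo = rng.normal(size=(2000, 10))
# rare positive class (a few percent), loosely tied to the first feature
y_demo = (X_demo[:, 0] + rng.normal(scale=2.0, size=2000) > 4.0).astype(int)

skf_demo = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
aucs = []
for train_idx, val_idx in skf_demo.split(X_demo, y_demo):
    model = GradientBoostingRegressor(random_state=42)
    model.fit(X_demo[train_idx], y_demo[train_idx])
    scores = model.predict(X_demo[val_idx])  # continuous scores, not hard class labels
    aucs.append(roc_auc_score(y_demo[val_idx], scores))
print('mean ROC AUC over folds: {:.3f}'.format(np.mean(aucs)))
###Output
_____no_output_____
###Markdown With that evaluation strategy in mind, load the training mailout data and build the model.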
###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') X = mailout_train.drop('RESPONSE',axis=1) y = mailout_train['RESPONSE'] # preprocess data df_mailout_train = data_preprocess_2(X, False) df_mailout_train.shape y.shape df_mailout_train.head(5) # Split the dataset into Train/Validation/Test X_train, X_val, y_train, y_val = train_test_split(df_mailout_train, y, stratify=y, test_size=0.2, random_state=42) xg_reg = xgb.XGBRegressor(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1, max_depth = 5, alpha = 10, n_estimators = 10) xg_reg.fit(X_train,y_train) preds = xg_reg.predict(X_val) preds print("ROC score on validation data: {:.4f}".format(roc_auc_score(y_val, preds))) ###Output ROC score on validation data: 0.5000 ###Markdown Model Evaluation and Validation In terms of evaluation metric to use, I have tried accuracy, precision, recall and fscore but due to very high imbalance (i.e. In MAILOUT_TRAIN dataset, we can find among 43000 individuals, only 532 people response to the mail-out campaign which means the training data is highly imbalanced.), none of these were a good way to measure and then finalised on AUC and ROC as the evaluation metric to proceed. ###Code def train_predict(learner, X_train, y_train, X_test, y_test): ''' inputs: - learner: the learning algorithm to be trained and predicted on - sample_size: the size of samples (number) to be drawn from training set - X_train: features training set - y_train: income training set - X_test: features testing set - y_test: income testing set ''' results = {} # TODO: Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:]) start = time() # Get start time learner = learner.fit(X_train, y_train) end = time() # Get end time # TODO: Calculate the training time #results['train_time'] = end - start # TODO: Get the predictions on the test set(X_test), # then get predictions on the first 300 training samples(X_train) using .predict() start = time() # Get start time predictions_test = learner.predict(X_test) predictions_train = learner.predict(X_train) end = time() # Get end time # print('unique predictions_train: ', set(predictions_train)) # TODO: Calculate the total prediction time #results['pred_time'] = end - start # TODO: Compute accuracy on the first 300 training samples which is y_train[:300] #results['acc_train'] = accuracy_score(y_train, predictions_train) # TODO: Compute accuracy on test set using accuracy_score() #results['acc_test'] = accuracy_score(y_test, predictions_test) #results['prec_train'] = precision_score(y_train, predictions_train) #results['recall_train'] = recall_score(y_train, predictions_train) #results['prec_test'] = precision_score(y_test, predictions_test) #results['recall_test'] = recall_score(y_test, predictions_test) # TODO: Compute F-score on the the first 300 training samples using fbeta_score() #results['f_train'] = fbeta_score(y_train, predictions_train, beta=1) # TODO: Compute F-score on the test set which is y_test #results['f_test'] = fbeta_score(y_test, predictions_test, beta=1) # Success #print("{} trained on samples.".format(learner.__class__.__name__)) #results['roc'] = roc_auc_score(y_test, predictions_test) roc = roc_auc_score(y_test, predictions_test) # Return the results return roc # Initialize 5 stratified folds skf = StratifiedKFold(n_splits=5, random_state=42) skf.get_n_splits(X, y) ###Output _____no_output_____ ###Markdown In this project we try 3 
algorithms and use the evaluation metric to select the best one to use ###Code alg_abr = AdaBoostRegressor(random_state=42) alg_gbr = GradientBoostingRegressor(random_state=42) alg_xgb = XGBRegressor(random_state=42) result_list = [] for alg in [alg_abr, alg_gbr, alg_xgb]: alg_name = alg.__class__.__name__ j=0 for train_index, val_index in skf.split(df_mailout_train, y): j+=1 #print('Fold {}...'.format(j)) result = {} result['alg_name'] = alg_name result['fold'] = j # Split the data into training and test sets X_train, X_val = df_mailout_train.iloc[train_index], df_mailout_train.iloc[val_index] y_train, y_val = y.iloc[train_index], y.iloc[val_index] result['roc'] = train_predict(alg, X_train, y_train, X_val, y_val) result_list.append(result) print (result) #return result_list print('result_list: ', result_list) df_scores = pd.DataFrame(result_list) df_scores ###Output _____no_output_____ ###Markdown Comparing all the scores ###Code df_scores.groupby('alg_name')['roc'].mean() ###Output _____no_output_____ ###Markdown Tuning With GradientBoostingRegressor now chosen, we need to test and finalise the hyperparameters best suited for our project, using GridSearchCV to pick them. Parameters used: 1. Learning rate: I have kept the default value of 0.1; the learning rate shrinks the contribution of each tree, and there is a trade-off between learning_rate and n_estimators. 2. N estimators: I have increased this from the default to 500; it is the number of boosting stages to perform, and gradient boosting is fairly robust to over-fitting, so a large number usually results in better performance. 3. Subsample: I have lowered this from the default to 0.6; it is the fraction of samples used for fitting the individual base learners. Values smaller than 1.0 result in stochastic gradient boosting; subsample interacts with n_estimators, and choosing subsample < 1.0 trades a reduction in variance for an increase in bias. 4. Max depth: I have kept the default value of 3; it is the maximum depth of the individual regression estimators and limits the number of nodes in each tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables. ###Code parameters = { 'learning_rate' : [0.1], 'n_estimators' :[500], 'subsample' : [0.6], 'max_depth' : [3] } # Perform grid search on the classifier using 'scorer' as the scoring method cv = GridSearchCV(alg_gbr, parameters, scoring = 'roc_auc', n_jobs= -1) # Fit the grid search object to the training data and find the optimal parameters grid_fit = cv.fit(X_train, y_train) cv.grid_scores_, cv.best_params_, cv.best_score_ # Get the estimator and predict best_clf = grid_fit.best_estimator_ #predictions = (best_clf.fit(X_train, y_train)).predict(X_test) best_predictions = best_clf.predict(X_val) roc_auc_score(y_val, best_predictions) ###Output _____no_output_____ ###Markdown Why I am using ROC: the model has many features that could affect whether a person becomes a potential customer, and because we care about how likely each individual is to respond rather than about a single hard decision, it is better to use AUC, which averages performance over all possible thresholds. If the task only needed a hard decision between the two classes, without any notion of how likely each class is, it would be more appropriate to rely on an F-score at a particular threshold.
So ROC AUC is the best choice here. Some more testing to find the best hyperparameters ###Code alg_test = GradientBoostingRegressor(learning_rate = 0.1, n_estimators = 500, subsample = 0.6, max_depth = 3) #from sklearn.cross_validation import KFold, StratifiedKFold cv = StratifiedKFold(n_splits=2,shuffle=True, random_state=42) roc_score = [] for train,test in cv.split(X_train,y_train): preds = alg_test.fit(X_train,y_train) predictions_test = preds.predict(X_val) roc_score.append(roc_auc_score(y_val, predictions_test)) print(roc_score) ###Output [0.70145945152726585, 0.7133480671827962] ###Markdown Final model for scoring and Kaggle submission ###Code clf_final = GradientBoostingRegressor(learning_rate = 0.1, n_estimators = 500, subsample = 0.6, max_depth = 3) preds = clf_final.fit(X_train,y_train) predictions_test = preds.predict(X_val) print(roc_auc_score(y_val, predictions_test)) ###Output 0.66946839895 ###Markdown We have the best results using GradientBoostingRegressor with GridSearchCV ###Code parameters = { 'learning_rate' : [0.1], 'n_estimators' :[500], 'subsample' : [0.6], 'max_depth' : [3] } # Perform grid search on the classifier using 'scorer' as the scoring method cv = GridSearchCV(alg_gbr, parameters, scoring = 'roc_auc', n_jobs= -1) # Fit the grid search object to the training data and find the optimal parameters grid_fit = cv.fit(X_train, y_train) cv.grid_scores_, cv.best_params_, cv.best_score_ # Get the estimator and predict best_clf = grid_fit.best_estimator_ #predictions = (best_clf.fit(X_train, y_train)).predict(X_test) best_predictions = best_clf.predict(X_val) print(roc_auc_score(y_val, best_predictions)) ###Output 0.723765336025 ###Markdown Identify and understand the important features from the supervised learning model ###Code feat_importance = clf_final.feature_importances_ feat_importance num_feat = 5 indices = np.argsort(feat_importance)[::-1] columns = X_train.columns.values[indices[:num_feat]] values = feat_importance[indices][:num_feat] #print((indices)) print(columns) print(values) plt.title('Feature Importances') plt.barh(np.arange(num_feat), values, color='b', align='center', label = "Feature Weight") #plt.barh(np.arange(num_feat), np.cumsum(values), color='b', align='center',label = "Cumsum Weight") plt.yticks(np.arange(num_feat), columns) plt.xlabel('Relative Importance') plt.show() #for name, importance in zip(X_train.column, feat_importance): # print(name, "=", importance) ###Output ['KBA13_ANZAHL_PKW' 'D19_SOZIALES' 'VERDICHTUNGSRAUM' 'ANZ_HAUSHALTE_AKTIV' 'MIN_GEBAEUDEJAHR'] [ 0.04412244 0.02574745 0.02354454 0.02262528 0.02084254] ###Markdown Analyse the most important feature in light of what we learnt earlier in unsupervised learning ###Code # the earlier fit of the data was done inside a method, so it cannot be reused now for inverse_transform. # as we have the actual cleaned customer data, let's quickly re-run PCA.
customers_pca = PCA(n_components=220).fit(customers) customers_pca_data = customers_pca.transform(customers) ###Output _____no_output_____ ###Markdown The idea is to compare what we identified in supervised learning with what we identified earlier in unsupervised learning. In unsupervised learning we identified various clusters, some of which are over-represented among customers and some under-represented. Here we check how the most important feature identified by the GradientBoostingRegressor is distributed in an over-represented and an under-represented cluster. Steps: 1) find the items in the required cluster; 2) fetch the actual (reduced) data for those items; 3) apply inverse_transform to recover the full data from the reduced data; 4) plot the distribution of the required column in that dataframe. ###Code def get_feat_dist_in_cluster(cluster_number, feature): # find items in the required cluster final_items_in_cluster = customers_clustered[customers_clustered['Cluster'] == cluster_number].index # get data of items in the identified cluster final_reduced_data = reduced_customers.loc[final_items_in_cluster] final_data_list = customers_pca.inverse_transform(final_reduced_data) final_dataframe = pd.DataFrame(final_data_list, columns=customers.columns.values) final_dataframe[feature].hist() # 10 is the cluster with over-representation of customer data in comparison to the general population get_feat_dist_in_cluster(10, 'KBA13_ANZAHL_PKW') ###Output _____no_output_____ ###Markdown In cluster 10, the cluster where customers are over-represented, there is a single bar ###Code # 3 is the cluster with under-representation of customer data in comparison to the general population get_feat_dist_in_cluster(3, 'KBA13_ANZAHL_PKW') ###Output _____no_output_____ ###Markdown In cluster 3, the cluster where customers are under-represented, the values are spread out rather than concentrated on one particular value. Conclusion: KBA13_ANZAHL_PKW is the number of cars in the PLZ8 region (sub-postcode), so the conclusion could be that people who own a car, or families that share a car, are the most likely to respond to the marketing campaign and become customers of the mail-order company, perhaps because they want to save on fuel or avoid driving. Part 3: Kaggle Competition Now that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview! Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep.
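As a quick, hypothetical illustration of why the exact values do not matter, the toy example below (made-up labels and scores, unrelated to the project data) shows that ROC AUC depends only on how the scores rank the individuals: any strictly monotonic rescaling of the "RESPONSE" scores yields exactly the same AUC. ###Code
# Toy check that AUC is rank-based; y_true and scores are invented for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.65, 0.05, 0.20, 0.90])

print(roc_auc_score(y_true, scores))            # raw scores
print(roc_auc_score(y_true, 100 * scores + 7))  # same AUC after a linear rescaling
print(roc_auc_score(y_true, np.log(scores)))    # still the same AUC under any strictly monotonic transform
###Output
_____no_output_____
###Markdown With that in mind, generate scores for the test partition and write the submission file.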
###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') # as we want to make prediction using model trained with mailout_train, check/make sure this dataset is not different missing = list(np.setdiff1d(mailout_train.columns, mailout_test.columns)) missing print('before preprocessing mailout_test.shape: ', mailout_test.shape) mailout_test_clean = data_preprocess_2(mailout_test, False) print('after preprocessing mailout_test_clean.shape: ', mailout_test_clean.shape) prediction_for_kaggle = clf_final.predict(mailout_test_clean) df_kaggle = pd.DataFrame(index=mailout_test['LNR'].astype('int32'), data=prediction_for_kaggle) df_kaggle.rename(columns={0: "RESPONSE"}, inplace=True) df_kaggle.head(10) df_kaggle.to_csv('submission.csv') ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from collections import Counter from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import Pipeline from sklearn.metrics import roc_auc_score, accuracy_score, f1_score from sklearn.utils import resample from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.cluster import KMeans, MiniBatchKMeans # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. 
Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') # Some statistics about our dataset azdias.describe() customers.describe() #Understand data, read head and tail of each dataset azdias.head() azdias.tail() customers.head() customers.tail() #count how many Nan values we have in each column nan_values_azdias = ((azdias.isnull().sum(axis = 0) / azdias.shape[0]) *100).sort_values(ascending=False) nan_values_azdias #count how many Nan values we have in each column nan_values_customers = ((customers.isnull().sum(axis = 0) / customers.shape[0]) *100).sort_values(ascending=False) nan_values_customers azdias.describe(include=['O']) #Statistics for categorical variables #See unique values and how they are spread in the dataset for i in azdias.columns: print(i,'\n', azdias[i].unique(), len(azdias[i].unique())) #Just take a look to see if there's some reason to not drop this columns with more than 85% of Nan values not_null = azdias[azdias['ALTER_KIND4'].notnull() >= 1] not_null.head() #Lets analyse columns with Nan values between 50 and 85 percent azdias_missing_features_50_85 = nan_values_azdias[(nan_values_azdias > 50) & ( nan_values_azdias < 85)].index azdias[azdias_missing_features_50_85].head(20) #EXTSEL992 has a big variaty of values so it's better to just drop it, KK_KUNDENTYP will be dropped but maybe we can have #another approach if it's necessary in the future print(azdias['KK_KUNDENTYP'].describe()) azdias['KK_KUNDENTYP'].hist() #Lets analyse variable with Nan values between 20 to 50 percent azdias_missing_features_20_50 = nan_values_azdias[(nan_values_azdias > 20) &( nan_values_azdias < 50)].index azdias[azdias_missing_features_20_50].head() #see how many nan values we have and how it's spread in all categories plt.barh(azdias[azdias_missing_features_20_50].isna().sum().index, azdias[azdias_missing_features_20_50].isna().sum()) plt.show() #Open the features and attributes dataset to understand what zero means values = pd.read_excel('DIAS Attributes - Values 2017.xlsx') attributes = pd.read_excel('DIAS Information Levels - Attributes 2017.xlsx') values.head(10) #fill the cell withou value with the value from the lasts cell, this will make it easier to filter the other columns values["Description"] = values["Description"].ffill() values["Attribute"] = values["Attribute"].ffill() values.drop('Unnamed: 0', axis=1, inplace=True) values.head() values.set_index('Attribute', inplace=True) values.head() #Lets see how many of the values are zeros, maybe even not being Nan we can drop all column percent_20_50 = pd.DataFrame((azdias[azdias_missing_features_20_50].isin([0]).sum() / azdias.shape[0]) *100, columns=['Percent']) percent_20_50 not_found = [] for val in azdias_missing_features_20_50: if attributes[attributes['Attribute'] == val]['Description'].empty: not_found.append(val) print("Not found on list of attributes:", 
pd.DataFrame(not_found)) #See the columns we have a description on values dataset for val in azdias_missing_features_20_50: try: print(pd.DataFrame(values.loc[[val],['Value','Meaning']])) except: pass #See all kind of values in each column for i in azdias_missing_features_20_50: print(i,'\n', azdias[i].unique()) #See the percentage of every value on the attributes we found description #the most person didnt make internet transactions but a good part of them made 100 percent online valid_attribute_list = ['D19_BANKEN_ONLINE_QUOTE_12', 'D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP', 'D19_VERSAND_ONLINE_QUOTE_12'] list_percent = [] for num in range(11): percentage = (azdias[valid_attribute_list].isin([num]).sum() / azdias.shape[0]) *100 list_percent.append(percentage) # print('Percentage of ',num, 'values:\n', percentage) for i in valid_attribute_list: print(i, azdias[i].unique()) pd.DataFrame(list_percent) ###Output D19_BANKEN_ONLINE_QUOTE_12 [ nan 0. 10. 8. 5. 9. 7. 6. 3. 4. 2. 1.] D19_GESAMT_ONLINE_QUOTE_12 [ nan 0. 10. 7. 9. 5. 8. 6. 3. 4. 2. 1.] D19_KONSUMTYP [ nan 9. 1. 4. 3. 6. 5. 2.] D19_VERSAND_ONLINE_QUOTE_12 [ nan 0. 10. 7. 5. 9. 3. 8. 6. 4. 2. 1.] ###Markdown Most of the values are 0 or 10, except in KOMSUMTYP column, 0 is about no online transaction and 10 is 100 percent online transactions ###Code #Now lets see the columns with more than 10 and less than 20 percent of Nan azdias_missing_features_10_20 = nan_values_azdias[(nan_values_azdias > 10) & ( nan_values_azdias < 20)].index azdias[azdias_missing_features_10_20].head() #Lets find out the description of this columns, maybe there's a reason for Nan or maybe we can just drop this rows for val in azdias_missing_features_10_20: try: print(values.loc[[val],['Value','Meaning']]) except: pass ###Output Value Meaning Attribute KBA05_MOTOR -1, 9 unknown KBA05_MOTOR 1 very small engine KBA05_MOTOR 2 small engine KBA05_MOTOR 3 average engine KBA05_MOTOR 4 big engine Value Meaning Attribute KBA05_MOD8 -1, 9 unknown KBA05_MOD8 0 none KBA05_MOD8 1 low KBA05_MOD8 2 average KBA05_MOD8 3 high Value Meaning Attribute KBA05_MOD4 -1, 9 unknown KBA05_MOD4 0 none KBA05_MOD4 1 very low KBA05_MOD4 2 low KBA05_MOD4 3 average KBA05_MOD4 4 high KBA05_MOD4 5 very high Value Meaning Attribute KBA05_MOD3 -1, 9 unknown KBA05_MOD3 1 very low KBA05_MOD3 2 low KBA05_MOD3 3 average KBA05_MOD3 4 high KBA05_MOD3 5 very high Value Meaning Attribute KBA05_MOD2 -1, 9 unknown KBA05_MOD2 1 very low KBA05_MOD2 2 low KBA05_MOD2 3 average KBA05_MOD2 4 high KBA05_MOD2 5 very high Value Meaning Attribute KBA05_SEG1 -1, 9 unknown KBA05_SEG1 0 none KBA05_SEG1 1 low KBA05_SEG1 2 average KBA05_SEG1 3 high Value Meaning Attribute KBA05_MOD1 -1, 9 unknown KBA05_MOD1 0 none KBA05_MOD1 1 low KBA05_MOD1 2 average KBA05_MOD1 3 high KBA05_MOD1 4 very high Value Meaning Attribute KBA05_MAXVORB -1, 9 unknown KBA05_MAXVORB 1 no preowner KBA05_MAXVORB 2 1 preowner KBA05_MAXVORB 3 2 or more preowner Value Meaning Attribute KBA05_MAXSEG -1, 9 unknown KBA05_MAXSEG 1 small car KBA05_MAXSEG 2 lower middleclass car KBA05_MAXSEG 3 middle class car KBA05_MAXSEG 4 upper class car Value Meaning Attribute KBA05_MAXHERST -1, 9 unknown KBA05_MAXHERST 1 Top-German KBA05_MAXHERST 2 VW-Audi KBA05_MAXHERST 3 Ford/Opel KBA05_MAXHERST 4 European KBA05_MAXHERST 5 Asian Value Meaning Attribute KBA05_MAXBJ -1, 9 unknown KBA05_MAXBJ 1 before 1994 KBA05_MAXBJ 2 1994 - 1997 KBA05_MAXBJ 3 1998 - 2000 KBA05_MAXBJ 4 since 2001 Value Meaning Attribute KBA05_MAXAH -1, 9 unknown KBA05_MAXAH 1 below 30 years 
KBA05_MAXAH 2 30 - 40 years KBA05_MAXAH 3 40 - 50 years KBA05_MAXAH 4 50 - 60 years KBA05_MAXAH 5 elder than 60 years Value Meaning Attribute KBA05_KW3 -1, 9 unknown KBA05_KW3 0 none KBA05_KW3 1 low KBA05_KW3 2 average KBA05_KW3 3 high KBA05_KW3 4 very high Value Meaning Attribute KBA05_MOTRAD -1, 9 unknown KBA05_MOTRAD 0 none KBA05_MOTRAD 1 some KBA05_MOTRAD 2 some more KBA05_MOTRAD 3 very many Value Meaning Attribute KBA05_SEG3 -1, 9 unknown KBA05_SEG3 1 very low KBA05_SEG3 2 low KBA05_SEG3 3 average KBA05_SEG3 4 high KBA05_SEG3 5 very high Value Meaning Attribute KBA05_SEG10 -1, 9 unknown KBA05_SEG10 0 none KBA05_SEG10 1 very low KBA05_SEG10 2 low KBA05_SEG10 3 average KBA05_SEG10 4 high Value Meaning Attribute KBA05_SEG2 -1, 9 unknown KBA05_SEG2 1 very low KBA05_SEG2 2 low KBA05_SEG2 3 average KBA05_SEG2 4 high KBA05_SEG2 5 very high Value Meaning Attribute KBA05_ZUL2 -1, 9 unknown KBA05_ZUL2 1 very low KBA05_ZUL2 2 low KBA05_ZUL2 3 average KBA05_ZUL2 4 high KBA05_ZUL2 5 very high Value Meaning Attribute KBA05_SEG4 -1, 9 unknown KBA05_SEG4 1 very low KBA05_SEG4 2 low KBA05_SEG4 3 average KBA05_SEG4 4 high KBA05_SEG4 5 very high Value Meaning Attribute KBA05_KW1 -1, 9 unknown KBA05_KW1 1 very low KBA05_KW1 2 low KBA05_KW1 3 average KBA05_KW1 4 high KBA05_KW1 5 very high Value Meaning Attribute KBA05_SEG5 -1, 9 unknown KBA05_SEG5 0 none KBA05_SEG5 1 very low KBA05_SEG5 2 low KBA05_SEG5 3 average KBA05_SEG5 4 high Value Meaning Attribute KBA05_SEG6 -1, 9 unknown KBA05_SEG6 0 none KBA05_SEG6 1 some Value Meaning Attribute KBA05_SEG7 -1, 9 unknown KBA05_SEG7 0 none KBA05_SEG7 1 low KBA05_SEG7 2 average KBA05_SEG7 3 high Value Meaning Attribute KBA05_SEG8 -1, 9 unknown KBA05_SEG8 0 none KBA05_SEG8 1 low KBA05_SEG8 2 average KBA05_SEG8 3 high Value Meaning Attribute KBA05_SEG9 -1, 9 unknown KBA05_SEG9 0 none KBA05_SEG9 1 low KBA05_SEG9 2 average KBA05_SEG9 3 high Value Meaning Attribute KBA05_VORB0 -1, 9 unknown KBA05_VORB0 1 very low KBA05_VORB0 2 low KBA05_VORB0 3 average KBA05_VORB0 4 high KBA05_VORB0 5 very high Value Meaning Attribute KBA05_VORB1 -1, 9 unknown KBA05_VORB1 1 very low KBA05_VORB1 2 low KBA05_VORB1 3 average KBA05_VORB1 4 high KBA05_VORB1 5 very high Value Meaning Attribute KBA05_VORB2 -1, 9 unknown KBA05_VORB2 0 none KBA05_VORB2 1 very low KBA05_VORB2 2 low KBA05_VORB2 3 average KBA05_VORB2 4 high KBA05_VORB2 5 very high Value Meaning Attribute KBA05_ZUL1 -1, 9 unknown KBA05_ZUL1 1 very low KBA05_ZUL1 2 low KBA05_ZUL1 3 average KBA05_ZUL1 4 high KBA05_ZUL1 5 very high Value Meaning Attribute KBA05_KW2 -1, 9 unknown KBA05_KW2 1 very low KBA05_KW2 2 low KBA05_KW2 3 average KBA05_KW2 4 high KBA05_KW2 5 very high Value Meaning Attribute KBA05_KRSHERST1 -1, 9 unknown KBA05_KRSHERST1 1 way below average KBA05_KRSHERST1 2 below average KBA05_KRSHERST1 3 average KBA05_KRSHERST1 4 above average KBA05_KRSHERST1 5 way above average Value Meaning Attribute KBA05_KRSZUL -1, 9 unknown KBA05_KRSZUL 1 below average KBA05_KRSZUL 2 average KBA05_KRSZUL 3 above average Value Meaning Attribute KBA05_CCM3 -1, 9 unknown KBA05_CCM3 1 very low KBA05_CCM3 2 low KBA05_CCM3 3 average KBA05_CCM3 4 high KBA05_CCM3 5 very high Value Meaning Attribute KBA05_ALTER1 -1, 9 unknown KBA05_ALTER1 0 none KBA05_ALTER1 1 low KBA05_ALTER1 2 average KBA05_ALTER1 3 high KBA05_ALTER1 4 very high Value Meaning Attribute KBA05_ALTER2 -1, 9 unknown KBA05_ALTER2 1 very low KBA05_ALTER2 2 low KBA05_ALTER2 3 average KBA05_ALTER2 4 high KBA05_ALTER2 5 very high Value Meaning Attribute KBA05_ALTER3 -1, 9 unknown 
KBA05_ALTER3 1 very low KBA05_ALTER3 2 low KBA05_ALTER3 3 average KBA05_ALTER3 4 high KBA05_ALTER3 5 very high Value Meaning Attribute KBA05_ALTER4 -1, 9 unknown KBA05_ALTER4 0 none KBA05_ALTER4 1 very low KBA05_ALTER4 2 low KBA05_ALTER4 3 average KBA05_ALTER4 4 high KBA05_ALTER4 5 very high Value Meaning Attribute KBA05_ANHANG -1, 9 unknown KBA05_ANHANG 0 none KBA05_ANHANG 1 some KBA05_ANHANG 2 some more KBA05_ANHANG 3 very many Value Meaning Attribute KBA05_ANTG1 -1 unknown KBA05_ANTG1 0 no 1-2 family homes KBA05_ANTG1 1 lower share of 1-2 family homes KBA05_ANTG1 2 average share of 1-2 family homes KBA05_ANTG1 3 high share of 1-2 family homes KBA05_ANTG1 4 very high share of 1-2 family homes Value Meaning Attribute KBA05_ANTG2 -1 unknown KBA05_ANTG2 0 no 3-5 family homes KBA05_ANTG2 1 lower share of 3-5 family homes KBA05_ANTG2 2 average share of 3-5 family homes KBA05_ANTG2 3 high share of 3-5 family homes KBA05_ANTG2 4 very high share of 3-5 family homes Value Meaning Attribute KBA05_ANTG3 -1 unknown KBA05_ANTG3 0 no 6-10 family homes KBA05_ANTG3 1 lower share of 6-10 family homes KBA05_ANTG3 2 average share of 6-10 family homes KBA05_ANTG3 3 high share of 6-10 family homes Value Meaning Attribute KBA05_ANTG4 -1 unknown KBA05_ANTG4 0 no >10 family homes KBA05_ANTG4 1 lower share of >10 family homes KBA05_ANTG4 2 high share of >10 family homes Value Meaning Attribute KBA05_AUTOQUOT 1 very low car quote KBA05_AUTOQUOT 2 low car quote KBA05_AUTOQUOT 3 average car quote KBA05_AUTOQUOT 4 high car quote KBA05_AUTOQUOT 5 very high car quote KBA05_AUTOQUOT -1, 9 unknown Value Meaning Attribute KBA05_BAUMAX -1, 0 unknown KBA05_BAUMAX 1 mainly 1-2 family homes in the microcell KBA05_BAUMAX 2 mainly 3-5 family homes in the microcell KBA05_BAUMAX 3 mainly 6-10 family homes in the microcell KBA05_BAUMAX 4 mainly>10 family homes in the microcell KBA05_BAUMAX 5 mainly business buildings in the microcell Value Meaning Attribute KBA05_CCM1 -1, 9 unknown KBA05_CCM1 1 very low KBA05_CCM1 2 low KBA05_CCM1 3 average KBA05_CCM1 4 high KBA05_CCM1 5 very high Value Meaning Attribute KBA05_CCM2 -1, 9 unknown KBA05_CCM2 1 very low KBA05_CCM2 2 low KBA05_CCM2 3 average KBA05_CCM2 4 high KBA05_CCM2 5 very high Value Meaning Attribute KBA05_CCM4 -1, 9 unknown KBA05_CCM4 0 none KBA05_CCM4 1 low KBA05_CCM4 2 average KBA05_CCM4 3 high KBA05_CCM4 4 very high Value Meaning Attribute KBA05_KRSVAN -1, 9 unknown KBA05_KRSVAN 1 below average KBA05_KRSVAN 2 average KBA05_KRSVAN 3 above average Value Meaning Attribute KBA05_DIESEL -1, 9 unknown KBA05_DIESEL 0 none KBA05_DIESEL 1 very low KBA05_DIESEL 2 low KBA05_DIESEL 3 average KBA05_DIESEL 4 high Value Meaning Attribute KBA05_FRAU -1, 9 unknown KBA05_FRAU 1 very low KBA05_FRAU 2 low KBA05_FRAU 3 average KBA05_FRAU 4 high KBA05_FRAU 5 very high Value Meaning Attribute KBA05_GBZ -1, 0 unknown KBA05_GBZ 1 1-2 buildings KBA05_GBZ 2 3-4 buildings KBA05_GBZ 3 5-16 buildings KBA05_GBZ 4 17-22 buildings KBA05_GBZ 5 >=23 buildings Value Meaning Attribute KBA05_HERST1 -1, 9 unknown KBA05_HERST1 0 none KBA05_HERST1 1 very low KBA05_HERST1 2 low KBA05_HERST1 3 average KBA05_HERST1 4 high KBA05_HERST1 5 very high Value Meaning Attribute KBA05_HERST2 -1, 9 unknown KBA05_HERST2 0 none KBA05_HERST2 1 very low KBA05_HERST2 2 low KBA05_HERST2 3 average KBA05_HERST2 4 high KBA05_HERST2 5 very high Value Meaning Attribute KBA05_HERST3 -1, 9 unknown KBA05_HERST3 0 none KBA05_HERST3 1 very low KBA05_HERST3 2 low KBA05_HERST3 3 average KBA05_HERST3 4 high KBA05_HERST3 5 very high Value Meaning 
Attribute KBA05_HERST4 -1, 9 unknown KBA05_HERST4 0 none KBA05_HERST4 1 very low KBA05_HERST4 2 low KBA05_HERST4 3 average KBA05_HERST4 4 high KBA05_HERST4 5 very high Value Meaning Attribute KBA05_HERST5 -1, 9 unknown KBA05_HERST5 0 none KBA05_HERST5 1 very low KBA05_HERST5 2 low KBA05_HERST5 3 average KBA05_HERST5 4 high KBA05_HERST5 5 very high Value Meaning Attribute KBA05_KRSAQUOT -1, 9 unknown KBA05_KRSAQUOT 1 way below average KBA05_KRSAQUOT 2 below average KBA05_KRSAQUOT 3 average KBA05_KRSAQUOT 4 above average KBA05_KRSAQUOT 5 way above average Value Meaning Attribute KBA05_KRSHERST2 -1, 9 unknown KBA05_KRSHERST2 1 way below average KBA05_KRSHERST2 2 below average KBA05_KRSHERST2 3 average KBA05_KRSHERST2 4 above average KBA05_KRSHERST2 5 way above average Value Meaning Attribute KBA05_KRSHERST3 -1, 9 unknown KBA05_KRSHERST3 1 way below average KBA05_KRSHERST3 2 below average KBA05_KRSHERST3 3 average KBA05_KRSHERST3 4 above average KBA05_KRSHERST3 5 way above average Value Meaning Attribute KBA05_KRSKLEIN -1, 9 unknown KBA05_KRSKLEIN 1 below average KBA05_KRSKLEIN 2 average KBA05_KRSKLEIN 3 above average Value Meaning Attribute KBA05_KRSOBER -1, 9 unknown KBA05_KRSOBER 1 below average KBA05_KRSOBER 2 average KBA05_KRSOBER 3 above average Value Meaning Attribute KBA05_ZUL4 -1, 9 unknown KBA05_ZUL4 0 none KBA05_ZUL4 1 very low KBA05_ZUL4 2 low KBA05_ZUL4 3 average KBA05_ZUL4 4 high KBA05_ZUL4 5 very high Value Meaning Attribute KBA05_ZUL3 -1, 9 unknown KBA05_ZUL3 0 none KBA05_ZUL3 1 very low KBA05_ZUL3 2 low KBA05_ZUL3 3 average KBA05_ZUL3 4 high KBA05_ZUL3 5 very high Value Meaning Attribute MOBI_REGIO 1 very high mobility MOBI_REGIO 2 high mobility MOBI_REGIO 3 middle mobility MOBI_REGIO 4 low mobility MOBI_REGIO 5 very low mobility MOBI_REGIO 6 none Value Meaning Attribute KKK -1, 0 unknown KKK 1 very high KKK 2 high KKK 3 average KKK 4 low Value Meaning Attribute REGIOTYP -1, 0 unknown REGIOTYP 1 upper class REGIOTYP 2 conservatives REGIOTYP 3 upper middle class REGIOTYP 4 middle class REGIOTYP 5 lower middle class REGIOTYP 6 traditional workers REGIOTYP 7 marginal groups Value Meaning Attribute PLZ8_ANTG1 -1 unknown PLZ8_ANTG1 0 none PLZ8_ANTG1 1 low share PLZ8_ANTG1 2 average share PLZ8_ANTG1 3 high share PLZ8_ANTG1 4 very high share Value Meaning Attribute PLZ8_BAUMAX 1 mainly 1-2 family homes PLZ8_BAUMAX 2 mainly 3-5 family homes PLZ8_BAUMAX 3 mainly 6-10 family homes PLZ8_BAUMAX 4 mainly >10 family homes PLZ8_BAUMAX 5 mainly business building Value Meaning Attribute PLZ8_ANTG2 -1 unknown PLZ8_ANTG2 0 none PLZ8_ANTG2 1 low share PLZ8_ANTG2 2 average share PLZ8_ANTG2 3 high share PLZ8_ANTG2 4 very high share Value Meaning Attribute PLZ8_ANTG3 -1 unknown PLZ8_ANTG3 0 none PLZ8_ANTG3 1 low share PLZ8_ANTG3 2 average share PLZ8_ANTG3 3 high share Value Meaning Attribute PLZ8_ANTG4 -1 unknown PLZ8_ANTG4 0 none PLZ8_ANTG4 1 low share PLZ8_ANTG4 2 high share Value Meaning Attribute PLZ8_GBZ -1 unknown PLZ8_GBZ 1 less than 60 buildings PLZ8_GBZ 2 60-129 buildings PLZ8_GBZ 3 130-299 buildings PLZ8_GBZ 4 300-449 buildings PLZ8_GBZ 5 more than 449 buildings Value Meaning Attribute PLZ8_HHZ -1 unknown PLZ8_HHZ 1 less than 130 households PLZ8_HHZ 2 131-299 households PLZ8_HHZ 3 300-599 households PLZ8_HHZ 4 600-849 households PLZ8_HHZ 5 more than 849 households Value Meaning Attribute W_KEIT_KIND_HH -1, 0 unknown W_KEIT_KIND_HH 1 most likely W_KEIT_KIND_HH 2 very likely W_KEIT_KIND_HH 3 likely W_KEIT_KIND_HH 4 average W_KEIT_KIND_HH 5 unlikely W_KEIT_KIND_HH 6 very unlikely Value 
Meaning Attribute KBA13_BMW -1 unknown KBA13_BMW 0 none KBA13_BMW 1 very low KBA13_BMW 2 low KBA13_BMW 3 average KBA13_BMW 4 high KBA13_BMW 5 very high Value Meaning Attribute KBA13_CCM_1400 -1 unknown KBA13_CCM_1400 0 none KBA13_CCM_1400 1 very low KBA13_CCM_1400 2 low KBA13_CCM_1400 3 average KBA13_CCM_1400 4 high KBA13_CCM_1400 5 very high Value Meaning Attribute KBA13_CCM_1200 -1 unknown KBA13_CCM_1200 0 none KBA13_CCM_1200 1 very low KBA13_CCM_1200 2 low KBA13_CCM_1200 3 average KBA13_CCM_1200 4 high KBA13_CCM_1200 5 very high Value Meaning Attribute KBA13_CCM_1000 -1 unknown KBA13_CCM_1000 0 none KBA13_CCM_1000 1 very low KBA13_CCM_1000 2 low KBA13_CCM_1000 3 average KBA13_CCM_1000 4 high KBA13_CCM_1000 5 very high Value Meaning Attribute KBA13_CCM_0_1400 -1 unknown KBA13_CCM_0_1400 0 none KBA13_CCM_0_1400 1 very low KBA13_CCM_0_1400 2 low KBA13_CCM_0_1400 3 average KBA13_CCM_0_1400 4 high KBA13_CCM_0_1400 5 very high Value Meaning Attribute KBA13_SEG_SPORTWAGEN -1 unknown KBA13_SEG_SPORTWAGEN 0 none KBA13_SEG_SPORTWAGEN 1 very low KBA13_SEG_SPORTWAGEN 2 low KBA13_SEG_SPORTWAGEN 3 average KBA13_SEG_SPORTWAGEN 4 high KBA13_SEG_SPORTWAGEN 5 very high Value Meaning Attribute KBA13_BJ_2009 -1 unknown KBA13_BJ_2009 0 none KBA13_BJ_2009 1 very low KBA13_BJ_2009 2 low KBA13_BJ_2009 3 average KBA13_BJ_2009 4 high KBA13_BJ_2009 5 very high Value Meaning Attribute KBA13_BJ_2008 -1 unknown KBA13_BJ_2008 0 none KBA13_BJ_2008 1 very low KBA13_BJ_2008 2 low KBA13_BJ_2008 3 average KBA13_BJ_2008 4 high KBA13_BJ_2008 5 very high Value Meaning Attribute KBA13_BJ_2006 -1 unknown KBA13_BJ_2006 0 none KBA13_BJ_2006 1 very low KBA13_BJ_2006 2 low KBA13_BJ_2006 3 average KBA13_BJ_2006 4 high KBA13_BJ_2006 5 very high Value Meaning Attribute KBA13_BJ_2004 -1 unknown KBA13_BJ_2004 0 none KBA13_BJ_2004 1 very low KBA13_BJ_2004 2 low KBA13_BJ_2004 3 average KBA13_BJ_2004 4 high KBA13_BJ_2004 5 very high Value Meaning Attribute KBA13_CCM_1500 -1 unknown KBA13_CCM_1500 0 none KBA13_CCM_1500 1 very low KBA13_CCM_1500 2 low KBA13_CCM_1500 3 average KBA13_CCM_1500 4 high KBA13_CCM_1500 5 very high Value Meaning Attribute KBA13_CCM_3001 -1 unknown KBA13_CCM_3001 0 none KBA13_CCM_3001 1 very low KBA13_CCM_3001 2 low KBA13_CCM_3001 3 average KBA13_CCM_3001 4 high KBA13_CCM_3001 5 very high Value Meaning Attribute KBA13_CCM_1600 -1 unknown KBA13_CCM_1600 0 none KBA13_CCM_1600 1 very low KBA13_CCM_1600 2 low KBA13_CCM_1600 3 average KBA13_CCM_1600 4 high KBA13_CCM_1600 5 very high Value Meaning Attribute KBA13_CCM_1800 -1 unknown KBA13_CCM_1800 0 none KBA13_CCM_1800 1 very low KBA13_CCM_1800 2 low KBA13_CCM_1800 3 average KBA13_CCM_1800 4 high KBA13_CCM_1800 5 very high Value Meaning Attribute KBA13_CCM_2000 -1 unknown KBA13_CCM_2000 0 none KBA13_CCM_2000 1 very low KBA13_CCM_2000 2 low KBA13_CCM_2000 3 average KBA13_CCM_2000 4 high KBA13_CCM_2000 5 very high Value Meaning Attribute KBA13_CCM_2500 -1 unknown KBA13_CCM_2500 0 none KBA13_CCM_2500 1 very low KBA13_CCM_2500 2 low KBA13_CCM_2500 3 average KBA13_CCM_2500 4 high KBA13_CCM_2500 5 very high Value Meaning Attribute KBA13_SEG_UTILITIES -1 unknown KBA13_SEG_UTILITIES 0 none KBA13_SEG_UTILITIES 1 very low KBA13_SEG_UTILITIES 2 low KBA13_SEG_UTILITIES 3 average KBA13_SEG_UTILITIES 4 high KBA13_SEG_UTILITIES 5 very high Value Meaning Attribute KBA13_CCM_3000 -1 unknown KBA13_CCM_3000 0 none KBA13_CCM_3000 1 very low KBA13_CCM_3000 2 low KBA13_CCM_3000 3 average KBA13_CCM_3000 4 high KBA13_CCM_3000 5 very high Value Meaning Attribute KBA13_BJ_1999 -1 unknown 
KBA13_BJ_1999 0 none KBA13_BJ_1999 1 very low KBA13_BJ_1999 2 low KBA13_BJ_1999 3 average KBA13_BJ_1999 4 high KBA13_BJ_1999 5 very high Value Meaning Attribute KBA13_FAB_ASIEN -1 unknown KBA13_FAB_ASIEN 0 none KBA13_FAB_ASIEN 1 very low KBA13_FAB_ASIEN 2 low KBA13_FAB_ASIEN 3 average KBA13_FAB_ASIEN 4 high KBA13_FAB_ASIEN 5 very high Value Meaning Attribute KBA13_FAB_SONSTIGE -1 unknown KBA13_FAB_SONSTIGE 0 none KBA13_FAB_SONSTIGE 1 very low KBA13_FAB_SONSTIGE 2 low KBA13_FAB_SONSTIGE 3 average KBA13_FAB_SONSTIGE 4 high KBA13_FAB_SONSTIGE 5 very high Value Meaning Attribute KBA13_FIAT -1 unknown KBA13_FIAT 0 none KBA13_FIAT 1 very low KBA13_FIAT 2 low KBA13_FIAT 3 average KBA13_FIAT 4 high KBA13_FIAT 5 very high Value Meaning Attribute KBA13_FORD -1 unknown KBA13_FORD 0 none KBA13_FORD 1 very low KBA13_FORD 2 low KBA13_FORD 3 average KBA13_FORD 4 high KBA13_FORD 5 very high Value Meaning Attribute KBA13_BJ_2000 -1 unknown KBA13_BJ_2000 0 none KBA13_BJ_2000 1 very low KBA13_BJ_2000 2 low KBA13_BJ_2000 3 average KBA13_BJ_2000 4 high KBA13_BJ_2000 5 very high Value Meaning Attribute KBA13_SEG_WOHNMOBILE -1 unknown KBA13_SEG_WOHNMOBILE 0 none KBA13_SEG_WOHNMOBILE 1 very low KBA13_SEG_WOHNMOBILE 2 low KBA13_SEG_WOHNMOBILE 3 average KBA13_SEG_WOHNMOBILE 4 high KBA13_SEG_WOHNMOBILE 5 very high Value Meaning Attribute KBA13_VW -1 unknown KBA13_VW 0 none KBA13_VW 1 very low KBA13_VW 2 low KBA13_VW 3 average KBA13_VW 4 high KBA13_VW 5 very high Value Meaning Attribute KBA13_VORB_3 -1 unknown KBA13_VORB_3 0 none KBA13_VORB_3 1 very low KBA13_VORB_3 2 low KBA13_VORB_3 3 average KBA13_VORB_3 4 high KBA13_VORB_3 5 very high Value Meaning Attribute KBA13_VORB_2 -1 unknown KBA13_VORB_2 0 none KBA13_VORB_2 1 very low KBA13_VORB_2 2 low KBA13_VORB_2 3 average KBA13_VORB_2 4 high KBA13_VORB_2 5 very high Value Meaning Attribute KBA13_SEG_SONSTIGE -1 unknown KBA13_SEG_SONSTIGE 0 none KBA13_SEG_SONSTIGE 1 very low KBA13_SEG_SONSTIGE 2 low KBA13_SEG_SONSTIGE 3 average KBA13_SEG_SONSTIGE 4 high KBA13_SEG_SONSTIGE 5 very high Value Meaning Attribute KBA13_VORB_1 -1 unknown KBA13_VORB_1 0 none KBA13_VORB_1 1 very low KBA13_VORB_1 2 low KBA13_VORB_1 3 average KBA13_VORB_1 4 high KBA13_VORB_1 5 very high Value Meaning Attribute KBA13_VORB_0 -1 unknown KBA13_VORB_0 0 none KBA13_VORB_0 1 very low KBA13_VORB_0 2 low KBA13_VORB_0 3 average KBA13_VORB_0 4 high KBA13_VORB_0 5 very high Value Meaning Attribute KBA13_TOYOTA -1 unknown KBA13_TOYOTA 0 none KBA13_TOYOTA 1 very low KBA13_TOYOTA 2 low KBA13_TOYOTA 3 average KBA13_TOYOTA 4 high KBA13_TOYOTA 5 very high Value Meaning Attribute KBA13_SITZE_6 -1 unknown KBA13_SITZE_6 0 none KBA13_SITZE_6 1 very low KBA13_SITZE_6 2 low KBA13_SITZE_6 3 average KBA13_SITZE_6 4 high KBA13_SITZE_6 5 very high Value Meaning Attribute KBA13_SITZE_5 -1 unknown KBA13_SITZE_5 0 none KBA13_SITZE_5 1 very low KBA13_SITZE_5 2 low KBA13_SITZE_5 3 average KBA13_SITZE_5 4 high KBA13_SITZE_5 5 very high Value Meaning Attribute KBA13_SITZE_4 -1 unknown KBA13_SITZE_4 0 none KBA13_SITZE_4 1 very low KBA13_SITZE_4 2 low KBA13_SITZE_4 3 average KBA13_SITZE_4 4 high KBA13_SITZE_4 5 very high Value Meaning Attribute KBA13_SEG_VAN -1 unknown KBA13_SEG_VAN 0 none KBA13_SEG_VAN 1 very low KBA13_SEG_VAN 2 low KBA13_SEG_VAN 3 average KBA13_SEG_VAN 4 high KBA13_SEG_VAN 5 very high Value Meaning Attribute KBA13_AUTOQUOTE -1 unknown KBA13_AUTOQUOTE 0 none KBA13_AUTOQUOTE 1 very low KBA13_AUTOQUOTE 2 low KBA13_AUTOQUOTE 3 average KBA13_AUTOQUOTE 4 high KBA13_AUTOQUOTE 5 very high Value Meaning Attribute 
KBA13_ALTERHALTER_30 -1 unknown KBA13_ALTERHALTER_30 0 none KBA13_ALTERHALTER_30 1 very low KBA13_ALTERHALTER_30 2 low KBA13_ALTERHALTER_30 3 average KBA13_ALTERHALTER_30 4 high KBA13_ALTERHALTER_30 5 very high Value Meaning Attribute KBA13_ALTERHALTER_45 -1 unknown KBA13_ALTERHALTER_45 0 none KBA13_ALTERHALTER_45 1 very low KBA13_ALTERHALTER_45 2 low KBA13_ALTERHALTER_45 3 average KBA13_ALTERHALTER_45 4 high KBA13_ALTERHALTER_45 5 very high Value Meaning Attribute KBA13_ALTERHALTER_60 -1 unknown KBA13_ALTERHALTER_60 0 none KBA13_ALTERHALTER_60 1 very low KBA13_ALTERHALTER_60 2 low KBA13_ALTERHALTER_60 3 average KBA13_ALTERHALTER_60 4 high KBA13_ALTERHALTER_60 5 very high Value Meaning Attribute KBA13_ALTERHALTER_61 -1 unknown KBA13_ALTERHALTER_61 0 none KBA13_ALTERHALTER_61 1 very low KBA13_ALTERHALTER_61 2 low KBA13_ALTERHALTER_61 3 average KBA13_ALTERHALTER_61 4 high KBA13_ALTERHALTER_61 5 very high Value Meaning Attribute KBA13_HALTER_20 -1 unknown KBA13_HALTER_20 0 none KBA13_HALTER_20 1 very low KBA13_HALTER_20 2 low KBA13_HALTER_20 3 average KBA13_HALTER_20 4 high KBA13_HALTER_20 5 very high Value Meaning Attribute KBA13_ANZAHL_PKW … numeric value Value Meaning Attribute KBA13_AUDI -1 unknown KBA13_AUDI 0 none KBA13_AUDI 1 very low KBA13_AUDI 2 low KBA13_AUDI 3 average KBA13_AUDI 4 high KBA13_AUDI 5 very high Value Meaning Attribute KBA13_VORB_1_2 -1 unknown KBA13_VORB_1_2 0 none KBA13_VORB_1_2 1 very low KBA13_VORB_1_2 2 low KBA13_VORB_1_2 3 average KBA13_VORB_1_2 4 high KBA13_VORB_1_2 5 very high Value Meaning Attribute KBA13_HALTER_25 -1 unknown KBA13_HALTER_25 0 none KBA13_HALTER_25 1 very low KBA13_HALTER_25 2 low KBA13_HALTER_25 3 average KBA13_HALTER_25 4 high KBA13_HALTER_25 5 very high Value Meaning Attribute KBA13_KRSZUL_NEU -1 unknown KBA13_KRSZUL_NEU 0 none KBA13_KRSZUL_NEU 1 low KBA13_KRSZUL_NEU 2 average KBA13_KRSZUL_NEU 3 high Value Meaning Attribute KBA13_KW_110 -1 unknown KBA13_KW_110 0 none KBA13_KW_110 1 very low KBA13_KW_110 2 low KBA13_KW_110 3 average KBA13_KW_110 4 high KBA13_KW_110 5 very high Value Meaning Attribute KBA13_KW_120 -1 unknown KBA13_KW_120 0 none KBA13_KW_120 1 very low KBA13_KW_120 2 low KBA13_KW_120 3 average KBA13_KW_120 4 high KBA13_KW_120 5 very high Value Meaning Attribute KBA13_KW_121 -1 unknown KBA13_KW_121 0 none KBA13_KW_121 1 very low KBA13_KW_121 2 low KBA13_KW_121 3 average KBA13_KW_121 4 high KBA13_KW_121 5 very high Value Meaning Attribute KBA13_KW_30 -1 unknown KBA13_KW_30 0 none KBA13_KW_30 1 very low KBA13_KW_30 2 low KBA13_KW_30 3 average KBA13_KW_30 4 high KBA13_KW_30 5 very high Value Meaning Attribute KBA13_KW_40 -1 unknown KBA13_KW_40 0 none KBA13_KW_40 1 very low KBA13_KW_40 2 low KBA13_KW_40 3 average KBA13_KW_40 4 high KBA13_KW_40 5 very high Value Meaning Attribute KBA13_KW_50 -1 unknown KBA13_KW_50 0 none KBA13_KW_50 1 very low KBA13_KW_50 2 low KBA13_KW_50 3 average KBA13_KW_50 4 high KBA13_KW_50 5 very high Value Meaning Attribute KBA13_KW_60 -1 unknown KBA13_KW_60 0 none KBA13_KW_60 1 very low KBA13_KW_60 2 low KBA13_KW_60 3 average KBA13_KW_60 4 high KBA13_KW_60 5 very high Value Meaning Attribute KBA13_KW_61_120 -1 unknown KBA13_KW_61_120 0 none KBA13_KW_61_120 1 very low KBA13_KW_61_120 2 low KBA13_KW_61_120 3 average KBA13_KW_61_120 4 high KBA13_KW_61_120 5 very high Value Meaning Attribute KBA13_KW_70 -1 unknown KBA13_KW_70 0 none KBA13_KW_70 1 very low KBA13_KW_70 2 low KBA13_KW_70 3 average KBA13_KW_70 4 high KBA13_KW_70 5 very high Value Meaning Attribute KBA13_KW_80 -1 unknown KBA13_KW_80 0 none 
KBA13_KW_80 1 very low KBA13_KW_80 2 low KBA13_KW_80 3 average KBA13_KW_80 4 high KBA13_KW_80 5 very high Value Meaning Attribute KBA13_KW_90 -1 unknown KBA13_KW_90 0 none KBA13_KW_90 1 very low KBA13_KW_90 2 low KBA13_KW_90 3 average KBA13_KW_90 4 high KBA13_KW_90 5 very high Value Meaning Attribute KBA13_MAZDA -1 unknown KBA13_MAZDA 0 none KBA13_MAZDA 1 very low KBA13_MAZDA 2 low KBA13_MAZDA 3 average KBA13_MAZDA 4 high KBA13_MAZDA 5 very high Value Meaning Attribute KBA13_MERCEDES -1 unknown KBA13_MERCEDES 0 none KBA13_MERCEDES 1 very low KBA13_MERCEDES 2 low KBA13_MERCEDES 3 average KBA13_MERCEDES 4 high KBA13_MERCEDES 5 very high Value Meaning Attribute KBA13_MOTOR -1 unknown KBA13_MOTOR 0 none KBA13_MOTOR 1 mainly small engines KBA13_MOTOR 2 mainly medium sized engines KBA13_MOTOR 3 mainly high engines KBA13_MOTOR 4 mainly very big engines Value Meaning Attribute KBA13_NISSAN -1 unknown KBA13_NISSAN 0 none KBA13_NISSAN 1 very low KBA13_NISSAN 2 low KBA13_NISSAN 3 average KBA13_NISSAN 4 high KBA13_NISSAN 5 very high Value Meaning Attribute KBA13_OPEL -1 unknown KBA13_OPEL 0 none KBA13_OPEL 1 very low KBA13_OPEL 2 low KBA13_OPEL 3 average KBA13_OPEL 4 high KBA13_OPEL 5 very high Value Meaning Attribute KBA13_PEUGEOT -1 unknown KBA13_PEUGEOT 0 none KBA13_PEUGEOT 1 very low KBA13_PEUGEOT 2 low KBA13_PEUGEOT 3 average KBA13_PEUGEOT 4 high KBA13_PEUGEOT 5 very high Value Meaning Attribute KBA13_RENAULT -1 unknown KBA13_RENAULT 0 none KBA13_RENAULT 1 very low KBA13_RENAULT 2 low KBA13_RENAULT 3 average KBA13_RENAULT 4 high KBA13_RENAULT 5 very high Value Meaning Attribute KBA13_SEG_GELAENDEWAGEN -1 unknown KBA13_SEG_GELAENDEWAGEN 0 none KBA13_SEG_GELAENDEWAGEN 1 very low KBA13_SEG_GELAENDEWAGEN 2 low KBA13_SEG_GELAENDEWAGEN 3 average KBA13_SEG_GELAENDEWAGEN 4 high KBA13_SEG_GELAENDEWAGEN 5 very high Value Meaning Attribute KBA13_SEG_GROSSRAUMVANS -1 unknown KBA13_SEG_GROSSRAUMVANS 0 none KBA13_SEG_GROSSRAUMVANS 1 very low KBA13_SEG_GROSSRAUMVANS 2 low KBA13_SEG_GROSSRAUMVANS 3 average KBA13_SEG_GROSSRAUMVANS 4 high KBA13_SEG_GROSSRAUMVANS 5 very high Value Meaning Attribute KBA13_SEG_KLEINST -1 unknown KBA13_SEG_KLEINST 0 none KBA13_SEG_KLEINST 1 very low KBA13_SEG_KLEINST 2 low KBA13_SEG_KLEINST 3 average KBA13_SEG_KLEINST 4 high KBA13_SEG_KLEINST 5 very high Value Meaning Attribute KBA13_SEG_KLEINWAGEN -1 unknown KBA13_SEG_KLEINWAGEN 0 none KBA13_SEG_KLEINWAGEN 1 very low KBA13_SEG_KLEINWAGEN 2 low KBA13_SEG_KLEINWAGEN 3 average KBA13_SEG_KLEINWAGEN 4 high KBA13_SEG_KLEINWAGEN 5 very high Value Meaning Attribute KBA13_SEG_KOMPAKTKLASSE -1 unknown KBA13_SEG_KOMPAKTKLASSE 0 none KBA13_SEG_KOMPAKTKLASSE 1 very low KBA13_SEG_KOMPAKTKLASSE 2 low KBA13_SEG_KOMPAKTKLASSE 3 average KBA13_SEG_KOMPAKTKLASSE 4 high KBA13_SEG_KOMPAKTKLASSE 5 very high Value Meaning Attribute KBA13_SEG_MINIVANS -1 unknown KBA13_SEG_MINIVANS 0 none KBA13_SEG_MINIVANS 1 very low KBA13_SEG_MINIVANS 2 low KBA13_SEG_MINIVANS 3 average KBA13_SEG_MINIVANS 4 high KBA13_SEG_MINIVANS 5 very high Value Meaning Attribute KBA13_SEG_MINIWAGEN -1 unknown KBA13_SEG_MINIWAGEN 0 none KBA13_SEG_MINIWAGEN 1 very low KBA13_SEG_MINIWAGEN 2 low KBA13_SEG_MINIWAGEN 3 average KBA13_SEG_MINIWAGEN 4 high KBA13_SEG_MINIWAGEN 5 very high Value Meaning Attribute KBA13_HALTER_30 -1 unknown KBA13_HALTER_30 0 none KBA13_HALTER_30 1 very low KBA13_HALTER_30 2 low KBA13_HALTER_30 3 average KBA13_HALTER_30 4 high KBA13_HALTER_30 5 very high Value Meaning Attribute KBA13_SEG_MITTELKLASSE -1 unknown KBA13_SEG_MITTELKLASSE 0 none KBA13_SEG_MITTELKLASSE 1 
very low KBA13_SEG_MITTELKLASSE 2 low KBA13_SEG_MITTELKLASSE 3 average KBA13_SEG_MITTELKLASSE 4 high KBA13_SEG_MITTELKLASSE 5 very high Value Meaning Attribute KBA13_SEG_OBEREMITTELKLASSE -1 unknown KBA13_SEG_OBEREMITTELKLASSE 0 none KBA13_SEG_OBEREMITTELKLASSE 1 very low KBA13_SEG_OBEREMITTELKLASSE 2 low KBA13_SEG_OBEREMITTELKLASSE 3 average KBA13_SEG_OBEREMITTELKLASSE 4 high KBA13_SEG_OBEREMITTELKLASSE 5 very high Value Meaning Attribute KBA13_SEG_OBERKLASSE -1 unknown KBA13_SEG_OBERKLASSE 0 none KBA13_SEG_OBERKLASSE 1 very low KBA13_SEG_OBERKLASSE 2 low KBA13_SEG_OBERKLASSE 3 average KBA13_SEG_OBERKLASSE 4 high KBA13_SEG_OBERKLASSE 5 very high Value Meaning Attribute KBA13_KW_0_60 -1 unknown KBA13_KW_0_60 0 none KBA13_KW_0_60 1 very low KBA13_KW_0_60 2 low KBA13_KW_0_60 3 average KBA13_KW_0_60 4 high KBA13_KW_0_60 5 very high Value Meaning Attribute KBA13_CCM_2501 -1 unknown KBA13_CCM_2501 0 none KBA13_CCM_2501 1 very low KBA13_CCM_2501 2 low KBA13_CCM_2501 3 average KBA13_CCM_2501 4 high KBA13_CCM_2501 5 very high Value Meaning Attribute KBA13_KRSSEG_VAN -1 unknown KBA13_KRSSEG_VAN 0 none KBA13_KRSSEG_VAN 1 low KBA13_KRSSEG_VAN 2 average KBA13_KRSSEG_VAN 3 high Value Meaning Attribute KBA13_HERST_EUROPA -1 unknown KBA13_HERST_EUROPA 0 none KBA13_HERST_EUROPA 1 very low KBA13_HERST_EUROPA 2 low KBA13_HERST_EUROPA 3 average KBA13_HERST_EUROPA 4 high KBA13_HERST_EUROPA 5 very high Value Meaning Attribute KBA13_HERST_ASIEN -1 unknown KBA13_HERST_ASIEN 0 none KBA13_HERST_ASIEN 1 very low KBA13_HERST_ASIEN 2 low KBA13_HERST_ASIEN 3 average KBA13_HERST_ASIEN 4 high KBA13_HERST_ASIEN 5 very high Value Meaning Attribute KBA13_HALTER_66 -1 unknown KBA13_HALTER_66 0 none KBA13_HALTER_66 1 very low KBA13_HALTER_66 2 low KBA13_HALTER_66 3 average KBA13_HALTER_66 4 high KBA13_HALTER_66 5 very high Value Meaning Attribute KBA13_HALTER_65 -1 unknown KBA13_HALTER_65 0 none KBA13_HALTER_65 1 very low KBA13_HALTER_65 2 low KBA13_HALTER_65 3 average KBA13_HALTER_65 4 high KBA13_HALTER_65 5 very high Value Meaning Attribute KBA13_HERST_FORD_OPEL -1 unknown KBA13_HERST_FORD_OPEL 0 none KBA13_HERST_FORD_OPEL 1 very low KBA13_HERST_FORD_OPEL 2 low KBA13_HERST_FORD_OPEL 3 average KBA13_HERST_FORD_OPEL 4 high KBA13_HERST_FORD_OPEL 5 very high Value Meaning Attribute KBA13_HALTER_60 -1 unknown KBA13_HALTER_60 0 none KBA13_HALTER_60 1 very low KBA13_HALTER_60 2 low KBA13_HALTER_60 3 average KBA13_HALTER_60 4 high KBA13_HALTER_60 5 very high Value Meaning Attribute KBA13_HALTER_55 -1 unknown KBA13_HALTER_55 0 none KBA13_HALTER_55 1 very low KBA13_HALTER_55 2 low KBA13_HALTER_55 3 average KBA13_HALTER_55 4 high KBA13_HALTER_55 5 very high Value Meaning Attribute KBA13_HALTER_50 -1 unknown KBA13_HALTER_50 0 none KBA13_HALTER_50 1 very low KBA13_HALTER_50 2 low KBA13_HALTER_50 3 average KBA13_HALTER_50 4 high KBA13_HALTER_50 5 very high Value Meaning Attribute KBA13_HALTER_40 -1 unknown KBA13_HALTER_40 0 none KBA13_HALTER_40 1 very low KBA13_HALTER_40 2 low KBA13_HALTER_40 3 average KBA13_HALTER_40 4 high KBA13_HALTER_40 5 very high Value Meaning Attribute KBA13_HALTER_45 -1 unknown KBA13_HALTER_45 0 none KBA13_HALTER_45 1 very low KBA13_HALTER_45 2 low KBA13_HALTER_45 3 average KBA13_HALTER_45 4 high KBA13_HALTER_45 5 very high Value Meaning Attribute KBA13_HALTER_35 -1 unknown KBA13_HALTER_35 0 none KBA13_HALTER_35 1 very low KBA13_HALTER_35 2 low KBA13_HALTER_35 3 average KBA13_HALTER_35 4 high KBA13_HALTER_35 5 very high Value Meaning Attribute KBA13_KRSSEG_OBER -1 unknown KBA13_KRSSEG_OBER 0 none 
KBA13_KRSSEG_OBER 1 low KBA13_KRSSEG_OBER 2 average KBA13_KRSSEG_OBER 3 high Value Meaning Attribute KBA13_HERST_AUDI_VW -1 unknown KBA13_HERST_AUDI_VW 0 none KBA13_HERST_AUDI_VW 1 very low KBA13_HERST_AUDI_VW 2 low KBA13_HERST_AUDI_VW 3 average KBA13_HERST_AUDI_VW 4 high KBA13_HERST_AUDI_VW 5 very high Value Meaning Attribute KBA13_HERST_SONST -1 unknown KBA13_HERST_SONST 0 none KBA13_HERST_SONST 1 very low KBA13_HERST_SONST 2 low KBA13_HERST_SONST 3 average KBA13_HERST_SONST 4 high KBA13_HERST_SONST 5 very high Value Meaning Attribute KBA13_HERST_BMW_BENZ -1 unknown KBA13_HERST_BMW_BENZ 0 none KBA13_HERST_BMW_BENZ 1 very low KBA13_HERST_BMW_BENZ 2 low KBA13_HERST_BMW_BENZ 3 average KBA13_HERST_BMW_BENZ 4 high KBA13_HERST_BMW_BENZ 5 very high Value Meaning Attribute KBA13_KMH_0_140 -1 unknown KBA13_KMH_0_140 0 none KBA13_KMH_0_140 1 very low KBA13_KMH_0_140 2 low KBA13_KMH_0_140 3 average KBA13_KMH_0_140 4 high KBA13_KMH_0_140 5 very high Value Meaning Attribute KBA13_KMH_211 -1 unknown KBA13_KMH_211 0 none KBA13_KMH_211 1 very low KBA13_KMH_211 2 low KBA13_KMH_211 3 average KBA13_KMH_211 4 high KBA13_KMH_211 5 very high Value Meaning Attribute KBA13_KRSSEG_KLEIN -1 unknown KBA13_KRSSEG_KLEIN 0 none KBA13_KRSSEG_KLEIN 1 low KBA13_KRSSEG_KLEIN 2 average KBA13_KRSSEG_KLEIN 3 high Value Meaning Attribute KBA13_KRSHERST_FORD_OPEL -1 unknown KBA13_KRSHERST_FORD_OPEL 0 none KBA13_KRSHERST_FORD_OPEL 1 very low KBA13_KRSHERST_FORD_OPEL 2 low KBA13_KRSHERST_FORD_OPEL 3 average KBA13_KRSHERST_FORD_OPEL 4 high KBA13_KRSHERST_FORD_OPEL 5 very high Value Meaning Attribute KBA13_KRSHERST_BMW_BENZ -1 unknown KBA13_KRSHERST_BMW_BENZ 0 none KBA13_KRSHERST_BMW_BENZ 1 very low KBA13_KRSHERST_BMW_BENZ 2 low KBA13_KRSHERST_BMW_BENZ 3 average KBA13_KRSHERST_BMW_BENZ 4 high KBA13_KRSHERST_BMW_BENZ 5 very high Value Meaning Attribute KBA13_KRSHERST_AUDI_VW -1 unknown KBA13_KRSHERST_AUDI_VW 0 none KBA13_KRSHERST_AUDI_VW 1 very low KBA13_KRSHERST_AUDI_VW 2 low KBA13_KRSHERST_AUDI_VW 3 average KBA13_KRSHERST_AUDI_VW 4 high KBA13_KRSHERST_AUDI_VW 5 very high Value Meaning Attribute KBA13_KRSAQUOT -1 unknown KBA13_KRSAQUOT 0 none KBA13_KRSAQUOT 1 very low KBA13_KRSAQUOT 2 low KBA13_KRSAQUOT 3 average KBA13_KRSAQUOT 4 high KBA13_KRSAQUOT 5 very high Value Meaning Attribute KBA13_KMH_251 -1 unknown KBA13_KMH_251 0 none KBA13_KMH_251 1 very low KBA13_KMH_251 2 low KBA13_KMH_251 3 average KBA13_KMH_251 4 high KBA13_KMH_251 5 very high Value Meaning Attribute KBA13_KMH_110 -1 unknown KBA13_KMH_110 0 none KBA13_KMH_110 1 very low KBA13_KMH_110 2 low KBA13_KMH_110 3 average KBA13_KMH_110 4 high KBA13_KMH_110 5 very high Value Meaning Attribute KBA13_KMH_250 -1 unknown KBA13_KMH_250 0 none KBA13_KMH_250 1 very low KBA13_KMH_250 2 low KBA13_KMH_250 3 average KBA13_KMH_250 4 high KBA13_KMH_250 5 very high Value Meaning Attribute KBA13_KMH_180 -1 unknown KBA13_KMH_180 0 none KBA13_KMH_180 1 very low KBA13_KMH_180 2 low KBA13_KMH_180 3 average KBA13_KMH_180 4 high KBA13_KMH_180 5 very high Value Meaning Attribute KBA13_KMH_140_210 -1 unknown KBA13_KMH_140_210 0 none KBA13_KMH_140_210 1 very low KBA13_KMH_140_210 2 low KBA13_KMH_140_210 3 average KBA13_KMH_140_210 4 high KBA13_KMH_140_210 5 very high Value Meaning Attribute KBA13_KMH_140 -1 unknown KBA13_KMH_140 0 none KBA13_KMH_140 1 very low KBA13_KMH_140 2 low KBA13_KMH_140 3 average KBA13_KMH_140 4 high KBA13_KMH_140 5 very high Value Meaning Attribute CAMEO_DEU_2015 1A Work-Life-Balance CAMEO_DEU_2015 1B Wealthy Best Ager CAMEO_DEU_2015 1C Successful Songwriter 
CAMEO_DEU_2015 1D Old Nobility CAMEO_DEU_2015 1E City Nobility CAMEO_DEU_2015 2A Cottage Chic CAMEO_DEU_2015 2B Noble Jogger CAMEO_DEU_2015 2C Established gourmet CAMEO_DEU_2015 2D Fine Management CAMEO_DEU_2015 3A Career & Family CAMEO_DEU_2015 3B Powershopping Families CAMEO_DEU_2015 3C Rural Neighborhood CAMEO_DEU_2015 3D Secure Retirement CAMEO_DEU_2015 4A Family Starter CAMEO_DEU_2015 4B Family Life CAMEO_DEU_2015 4C String Trimmer CAMEO_DEU_2015 4D Empty Nest CAMEO_DEU_2015 4E Golden Ager CAMEO_DEU_2015 5A Younger Employees CAMEO_DEU_2015 5B Suddenly Family CAMEO_DEU_2015 5C Family First CAMEO_DEU_2015 5D Stock Market Junkies CAMEO_DEU_2015 5E Coffee Rider CAMEO_DEU_2015 5F Active Retirement CAMEO_DEU_2015 6A Jobstarter CAMEO_DEU_2015 6B Petty Bourgeois CAMEO_DEU_2015 6C Long-established CAMEO_DEU_2015 6D Sportgardener CAMEO_DEU_2015 6E Urban Parents CAMEO_DEU_2015 6F Frugal Aging CAMEO_DEU_2015 7A Journeymen CAMEO_DEU_2015 7B Mantaplatte CAMEO_DEU_2015 7C Factory Worker CAMEO_DEU_2015 7D Rear Window CAMEO_DEU_2015 7E Interested Retirees CAMEO_DEU_2015 8A Multi-culteral CAMEO_DEU_2015 8B Young & Mobile CAMEO_DEU_2015 8C Prefab CAMEO_DEU_2015 8D Town Seniors CAMEO_DEU_2015 9A First Shared Apartment CAMEO_DEU_2015 9B Temporary Workers CAMEO_DEU_2015 9C Afternoon Talk Show CAMEO_DEU_2015 9D Mini-Jobber CAMEO_DEU_2015 9E Socking Away Value Meaning Attribute CAMEO_DEUG_2015 -1 unknown CAMEO_DEUG_2015 1 upper class CAMEO_DEUG_2015 2 upper middleclass CAMEO_DEUG_2015 3 established middleclasse CAMEO_DEUG_2015 4 consumption-oriented middleclass CAMEO_DEUG_2015 5 active middleclass CAMEO_DEUG_2015 6 low-consumption middleclass CAMEO_DEUG_2015 7 lower middleclass CAMEO_DEUG_2015 8 working class CAMEO_DEUG_2015 9 urban working class Value Meaning Attribute RELAT_AB 1 very low RELAT_AB 2 low RELAT_AB 3 average RELAT_AB 4 high RELAT_AB 5 very high RELAT_AB -1, 9 unknown Value Meaning Attribute ORTSGR_KLS9 -1 unknown ORTSGR_KLS9 1 <= 2.000 inhabitants ORTSGR_KLS9 2 2.001 to 5.000 inhabitants ORTSGR_KLS9 3 5.001 to 10.000 inhabitants ORTSGR_KLS9 4 10.001 to 20.000 inhabitants ORTSGR_KLS9 5 20.001 to 50.000 inhabitants ORTSGR_KLS9 6 50.001 to 100.000 inhabitants ORTSGR_KLS9 7 100.001 to 300.000 inhabitants ORTSGR_KLS9 8 300.001 to 700.000 inhabitants ORTSGR_KLS9 9 > 700.000 inhabitants Value Meaning Attribute ANZ_HH_TITEL … numeric value (typically coded from 1-10) Value Meaning Attribute INNENSTADT -1 unknown INNENSTADT 1 city centre INNENSTADT 2 distance to the city centre 3 km INNENSTADT 3 distance to the city centre 3-5 km INNENSTADT 4 distance to the city centre 5-10 km INNENSTADT 5 distance to the city centre 10-20 km INNENSTADT 6 distance to the city centre 20-30 km INNENSTADT 7 distance to the city centre 30-40 km INNENSTADT 8 distance to the city centre > 40 km Value Meaning Attribute EWDICHTE -1 unknown EWDICHTE 1 less than 34 HH/km² EWDICHTE 2 34 - 89 HH/km² EWDICHTE 3 90 - 149 HH/km² EWDICHTE 4 150 - 319 HH/km² EWDICHTE 5 320 - 999 HH/km² EWDICHTE 6 more than 999 HH/² Value Meaning Attribute BALLRAUM -1 unknown BALLRAUM 1 till 10 km BALLRAUM 2 10 - 20 km BALLRAUM 3 20 - 30 km BALLRAUM 4 30 - 40 km BALLRAUM 5 40 - 50 km BALLRAUM 6 50-100 km BALLRAUM 7 more than 100 km Value Meaning Attribute GEBAEUDETYP_RASTER 1 business cell GEBAEUDETYP_RASTER 2 mixed cell with high business share GEBAEUDETYP_RASTER 3 mixed cell with middle business share GEBAEUDETYP_RASTER 4 mixed cell with low business share GEBAEUDETYP_RASTER 5 residential cell Value Meaning Attribute MIN_GEBAEUDEJAHR … numeric value 
Value Meaning Attribute WOHNLAGE -1 unknown WOHNLAGE 0 no score calculated WOHNLAGE 1 very good neighbourhood WOHNLAGE 2 good neighbourhood WOHNLAGE 3 average neighbourhood WOHNLAGE 4 poor neighbourhood WOHNLAGE 5 very poor neighbourhood WOHNLAGE 7 rural neighbourhood WOHNLAGE 8 new building in rural neighbourhood Value Meaning Attribute ANZ_HAUSHALTE_AKTIV … numeric value (typically coded from 1-10) Value Meaning Attribute KBA05_MODTEMP -1, 9 unknown KBA05_MODTEMP 1 promoted KBA05_MODTEMP 2 stayed upper level KBA05_MODTEMP 3 stayed lower/average level KBA05_MODTEMP 4 demoted KBA05_MODTEMP 5 new building Value Meaning Attribute GEBAEUDETYP -1, 0 unknown GEBAEUDETYP 1 residental building GEBAEUDETYP 2 residental building buildings without actually... GEBAEUDETYP 3 mixed (=residential and company) building GEBAEUDETYP 4 mixed building without actually known househol... GEBAEUDETYP 5 company building w/o known company GEBAEUDETYP 6 mixed building without actually known household GEBAEUDETYP 7 company building GEBAEUDETYP 8 mixed building without actually known company Value Meaning Attribute KBA05_HERSTTEMP -1, 9 unknown KBA05_HERSTTEMP 1 promoted KBA05_HERSTTEMP 2 stayed upper level KBA05_HERSTTEMP 3 stayed lower/average level KBA05_HERSTTEMP 4 demoted KBA05_HERSTTEMP 5 new building Value Meaning Attribute OST_WEST_KZ -1 unknown OST_WEST_KZ O East (GDR) OST_WEST_KZ W West (FRG)

###Markdown
There's a mix of 0, 9 and -1; we will change all of them to -1, since 'unknown' and 'none' can be treated as the same thing

###Code
#All NaN became -1, and we can see here that EINGEFUEGT_AM and OST_WEST_KZ both have to be treated
azdias[azdias_missing_features_10_20].head()

#Now let's see the columns with more than 0 and less than 10 percent of NaN
azdias_missing_features_0_10 = nan_values_azdias[(nan_values_azdias > 0) & ( nan_values_azdias < 10)].index
azdias[azdias_missing_features_0_10].head()

#Let's find out the description of these columns
for val in azdias_missing_features_0_10:
    try:
        print(values.loc[[val],['Description','Value','Meaning']])
    except:
        pass

#Count how many NaN values we have in each column
count_nan = ((azdias[azdias_missing_features_0_10].isnull().sum(axis = 0) / azdias.shape[0]) *100).sort_values(ascending=False)
count_nan

#Some columns have no defined meaning for 0 values, but we have 0's anyway, probably being used for unknown values
columns_with_description =['HH_EINKOMMEN_SCORE', 'CJT_GESAMTTYP', 'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN', 'LP_STATUS_GROB', 'ONLINE_AFFINITAET', 'LP_LEBENSPHASE_FEIN', 'RETOURTYP_BK_S', 'GFK_URLAUBERTYP']
for column in columns_with_description:
    print(column)
    print(azdias[column].describe())
    print(azdias[azdias[column] == 0][column].count())

#Let's find out the description of these columns; maybe there's a reason for the NaNs, or maybe we can just drop these rows
for val in azdias_missing_features_0_10:
    try:
        print(values.loc[[val],['Description','Value','Meaning']])
    except:
        pass

azdias.info()

#Analyse and change the columns that are object type
azdias_object_columns = azdias.select_dtypes(include=['object']).columns
azdias[azdias_object_columns].head()

#Let's find out the description of these columns
for val in azdias_object_columns:
    try:
        print(values.loc[[val],['Description','Value','Meaning']])
    except:
        pass

#See unique values to understand why two columns containing numbers are objects
for i in azdias_object_columns:
    print(i,'\n', azdias[i].unique())

#The 'X' will become -1 and it's necessary to change some values to int
azdias['CAMEO_DEUG_2015'].value_counts()

#See unique values
for i in azdias.columns:
    print(i,'\n', azdias[i].unique())

#Find the columns for which we have a description in the values file and more than 10 unique values
for val in azdias.columns:
    if len(azdias[val].unique()) > 10:
        try:
            print(values.loc[[val],['Description','Value','Meaning']])
            print(azdias[val].unique())
        except:
            pass

#Customers has 3 extra columns on top of the ones from azdias; it's good to see how they are distributed
extra_columns = ['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP']
customers[extra_columns].head()

#Some statistics about them
customers['ONLINE_PURCHASE'].hist()

#How many unique values we have
for i in extra_columns:
    print(customers[i].unique())

#How many NaN values
customers[extra_columns].isna().sum()

for i in extra_columns:
    customers[i].value_counts().plot(kind='bar')
    plt.show()

###Output
_____no_output_____

###Markdown
After the analysis it's time to define a function to clean the azdias data and apply it to the customers data

###Code
def clean_columns_azdias(data_1, threshold = 50):
    #OST_WEST_KZ has two kinds of object values ('W' and 'O'), and -1 marks unknown values
    data_1['OST_WEST_KZ'].replace(['W', 'O'], [2, 1], inplace=True)

    #Compute the percentage of NaN values per column and sort it
    nan_values = ((data_1.isnull().sum(axis = 0) / data_1.shape[0]) *100).sort_values(ascending=False)

    #Columns with a NaN share above the threshold will be dropped
    under_threshold = nan_values[nan_values > threshold].index

    #Drop these columns
    clean_1 = data_1.drop(under_threshold, axis=1, inplace=True)

    #Columns with NaNs between half the threshold and the threshold itself
    half_threshold = nan_values[(nan_values > (threshold / 2)) & (nan_values < threshold)].index

    #Drop rows that have fewer than 4 non-NaN values among the half_threshold columns
    data_1.dropna(subset=half_threshold, thresh=4, inplace=True)

    #For columns with a small amount of NaN, the NaNs will become the unknown value: -1
    for col in data_1.columns:
        data_1[col].fillna(-1, inplace=True)

    #Find the columns with float values
    float_columns = data_1.select_dtypes(include=['float64']).columns

    #Convert float values to int
    for col in float_columns:
        data_1[col] = data_1[col].astype('int')

    #Replace X with -1 and change this object column to int
    data_1['CAMEO_DEUG_2015'].replace('X', -1, inplace=True)
    data_1['CAMEO_DEUG_2015'] = data_1['CAMEO_DEUG_2015'].astype('int')

    #For simplicity we will just deal with columns with less than 11 unique values
    for column in data_1.columns:
        if len(data_1[column].unique()) > 11:
            data_1.drop(column, axis=1, inplace=True)
        elif ((data_1[column].isin([0]).sum() / data_1.shape[0]) *100) > 90:
            data_1.drop(column, axis=1, inplace=True)

    #Standardize the unknown values as -1, replacing the 0's
    for col in data_1.columns:
        data_1[col].replace(0, -1, inplace=True)

    #For columns with fewer than 9 unique values, replace the 9's with -1
    for col in data_1.columns:
        if len(data_1[col].unique()) < 9:
            data_1[col].replace(9, -1, inplace=True)

def drop_different_columns(data_1, data_2):
    #Specific columns from the customers dataset
    extra_col = pd.Index(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'])

    #Columns not dropped from data_1
    cleaned_columns = data_1.columns

    #The columns we will need in data_2
    all_col = cleaned_columns.append(extra_col)
    # columns_data_2 = data_2.columns

    #Drop all columns not included in all_col
    for i in data_2.columns:
        if i not in all_col:
            data_2.drop(columns=i, inplace=True)

def clean_columns_customers(data_2):
    #OST_WEST_KZ has two kinds of object values ('W' and 'O'), and -1 marks unknown values
    data_2['OST_WEST_KZ'].replace(['W', 'O'], [2, 1], inplace=True)

    #For columns with a small amount of NaN, the NaNs will become the unknown value: -1
    for col in data_2.columns:
        data_2[col].fillna(-1, inplace=True)

    #Find the columns with float values
    float_columns_2 = data_2.select_dtypes(include=['float64']).columns

    #Convert float values to int
    for col in float_columns_2:
        data_2[col] = data_2[col].astype('int')

    #Replace X with -1 and change this object column to int
    data_2['CAMEO_DEUG_2015'].replace('X', -1, inplace=True)
    data_2['CAMEO_DEUG_2015'] = data_2['CAMEO_DEUG_2015'].astype('int')

    #Standardize the unknown values as -1, replacing the 0's
    for col in data_2.columns:
        data_2[col].replace(0, -1, inplace=True)

    #For columns with fewer than 9 unique values, replace the 9's with -1
    for col in data_2.columns:
        if len(data_2[col].unique()) < 9:
            data_2[col].replace(9, -1, inplace=True)

clean_columns_azdias(azdias)
azdias.head()

drop_different_columns(azdias, customers)
clean_columns_customers(customers)
azdias.head()

#Save the cleaned datasets
azdias.to_csv('azdias_cleaned.csv')
customers.to_csv('customers_cleaned.csv')

###Output
_____no_output_____

###Markdown
Part 1: Customer Segmentation Report
The main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so.

###Code
azdias_cleaned = pd.read_csv('azdias_cleaned.csv', index_col=0)
customers_cleaned = pd.read_csv('customers_cleaned.csv', index_col=0)

#To apply PCA to the data we need to scale the values (mean = 0 and variance = 1)
scaler = StandardScaler()

# Apply transform
azdias_scaled = pd.DataFrame(scaler.fit_transform(azdias_cleaned), columns = azdias_cleaned.columns)

#To apply the scaler and PCA on the customers dataset, we need to drop the object columns
customers_cleaned.drop(columns=['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=True)

#Apply the scaler to customers
customers_scaled = pd.DataFrame(scaler.transform(customers_cleaned), columns = customers_cleaned.columns)

# performing principal components analysis on the scaled data
#instantiate
pca = PCA()

#Fit the scaled azdias data into PCA
pca.fit(azdias_scaled)

plt.plot(np.cumsum(pca.explained_variance_ratio_))

#Based on this graph, I will keep 120 components (explaining 90% of the data)
n_components = 120

# performing principal components analysis on the scaled data
#instantiate
pca = PCA(n_components=n_components)

#Fit and transform the scaled azdias data into PCA
azdias_pca = pca.fit_transform(azdias_scaled)

#Transform customers with the values from the azdias fit
customers_pca = pca.transform(customers_scaled)

PCA_components = pd.DataFrame(azdias_pca)
print(pd.DataFrame(pca.components_,columns=azdias_scaled.columns))

#Using the elbow technique to choose the number of clusters k
sse = {}
for k in range(2, 40, 1):
    mini_kmeans = MiniBatchKMeans(n_clusters=k, batch_size=300).fit(azdias_pca)
    sse[k] = mini_kmeans.inertia_ # Inertia: Sum of distances of samples to their closest cluster center
    print(k, sse[k])

plt.figure()
plt.plot(list(sse.keys()), list(sse.values()))
plt.xlabel("Number of cluster")
plt.ylabel("SSE")
plt.show()

#We will use 7 clusters; beyond that the SSE values start to get too close to each other
kmeans_7 = KMeans(n_clusters=7)
azdias_pred = kmeans_7.fit_predict(azdias_pca)
customers_pred = kmeans_7.predict(customers_pca)

# View how azdias is distributed on the clusters
azdias_count = Counter(azdias_pred)
x = azdias_count.keys()
y = np.array(list(azdias_count.values())) / len(azdias_pred) * 100
plt.bar(x, y)

# View how customers is distributed on the clusters
customers_count = Counter(customers_pred)
x = customers_count.keys()
y = np.array(list(customers_count.values())) / len(customers_pred) * 100
plt.bar(x, y)

###Output
_____no_output_____

###Markdown
We can see that, comparing azdias and customers, cluster 3 stands out as a possible target for future customers

Part 2: Supervised Learning Model
Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.
The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld.

###Code
mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';')
mailout_train.info()

#See the first lines of our dataset
mailout_train.head()

plt.hist(mailout_train['RESPONSE'], bins=2)
plt.ylabel('Count')
plt.xlabel('Values')
plt.show()
print(mailout_train.groupby('RESPONSE')['RESPONSE'].count())

###Output
_____no_output_____

###Markdown
It's possible to see that we have imbalanced data

###Code
mailout_train.describe()

#Split data into X and y
y_train = mailout_train['RESPONSE']
X_train = mailout_train.drop(labels=['RESPONSE'], axis=1)
X_train.head()
X_train.info()

#Count how many NaN values we have in each column
nan_values_X_train = ((X_train.isnull().sum(axis = 0) / X_train.shape[0]) *100).sort_values(ascending=False)
nan_values_X_train

#See unique values and how they are spread in the dataset
for i in X_train.columns:
    print(i,'\n', X_train[i].unique(), len(X_train[i].unique()))

#Drop the same columns we dropped from the azdias and customers datasets
cleaned_columns = azdias_cleaned.columns
cleaned_columns

X_train.drop(columns=[col for col in X_train if col not in cleaned_columns ], inplace=True)

#Replace object values with int
X_train['OST_WEST_KZ'].replace(['W', 'O'], [2, 1], inplace=True)

#Replace X with -1 on CAMEO_DEUG_2015
X_train['CAMEO_DEUG_2015'].replace('X', -1, inplace=True)

#Standardize the values: fill NaN, change to int, replace 0 with -1, and replace 9 with -1 if there are fewer than 9 unique values
for col in X_train.columns:
    X_train[col].fillna(-1, inplace=True)
    X_train[col] = X_train[col].astype('int')
    X_train[col].replace(0, -1, inplace=True)
    if len(X_train[col].unique()) < 9:
        X_train[col].replace(9, -1, inplace=True)

###Output
_____no_output_____

###Markdown
We have imbalanced data, so it's necessary to resample it to improve the algorithm

###Code
# setting up testing and training sets
X_train_res, X_valid_res, y_train_res, y_valid_res = train_test_split(X_train, y_train, test_size=0.25, random_state=27)

# concatenate our training data
back together X = pd.concat([X_train_res, y_train_res], axis=1) # separate minority and majority classes not_customer = X[X.RESPONSE == 0] customer = X[X.RESPONSE == 1] # upsample minority upsampled = resample(customer, replace=True, # sample with replacement n_samples=len(not_customer), # match number in majority class random_state=27) # reproducible results # combine majority and upsampled minority upsampled = pd.concat([not_customer, upsampled]) upsampled.RESPONSE.value_counts() # trying logistic regression with the balanced dataset y_train = upsampled.RESPONSE X_train = upsampled.drop('RESPONSE', axis=1) LogReg = LogisticRegression(solver='liblinear').fit(X_train, y_train) y_pred = LogReg.predict(X_valid_res) y_pred # Checking accuracy print(accuracy_score(y_valid_res, y_pred)) # f1 score print(f1_score(y_valid_res, y_pred)) #auc print(roc_auc_score(y_valid_res, y_pred)) # RandomForestClassifier rfc = RandomForestClassifier().fit(X_train, y_train) y_pred = rfc.predict(X_valid_res) y_pred # Checking accuracy print(accuracy_score(y_valid_res, y_pred)) # f1 score print(f1_score(y_valid_res, y_pred)) #auc print(roc_auc_score(y_valid_res, y_pred)) ###Output 0.944884089005 0.023102310231 0.503485836119 ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
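Because AUC rewards a good *ranking* of individuals rather than hard 0/1 labels, it is usually computed on the predicted probability of the positive class instead of the output of `predict()`. The cell below is a minimal sketch of that idea (not part of the original analysis); it reuses the fitted `LogReg` model and the `X_valid_res`/`y_valid_res` split from Part 2.

```python
# Minimal sketch: score the validation split with class probabilities instead of hard labels.
# Assumes LogReg, X_valid_res and y_valid_res from Part 2 are still in memory.
from sklearn.metrics import roc_auc_score

# predict_proba column 1 holds P(RESPONSE == 1), i.e. how likely each person is to become a customer
valid_probs = LogReg.predict_proba(X_valid_res)[:, 1]
print('Validation AUC (probability-based):', roc_auc_score(y_valid_res, valid_probs))
```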
###Code
mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';')
mailout_test.head()

lnr = mailout_test.LNR

#Drop the same columns we dropped from the azdias and customers datasets
cleaned_columns = azdias_cleaned.columns
cleaned_columns

mailout_test.drop(columns=[col for col in mailout_test if col not in cleaned_columns ], inplace=True)

#Replace object values with int
mailout_test['OST_WEST_KZ'].replace(['W', 'O'], [2, 1], inplace=True)

#Replace X with -1 on CAMEO_DEUG_2015
mailout_test['CAMEO_DEUG_2015'].replace('X', -1, inplace=True)

#Standardize the values: fill NaN, change to int, replace 0 with -1, and replace 9 with -1 if there are fewer than 9 unique values
for col in mailout_test.columns:
    mailout_test[col].fillna(-1, inplace=True)
    mailout_test[col] = mailout_test[col].astype('int')
    mailout_test[col].replace(0, -1, inplace=True)
    if len(mailout_test[col].unique()) < 9:
        mailout_test[col].replace(9, -1, inplace=True)

#Verify if there's any NaN value left
mailout_test.isna().sum()

#Predict the probability of the RESPONSE value
response = LogReg.predict_proba(mailout_test)

# generate result file for the competition
# (column 1 of predict_proba holds the probability of the positive class, RESPONSE = 1,
# so higher values mean individuals more likely to become customers)
result = pd.DataFrame({'LNR':lnr, 'RESPONSE':response[:,1]})
result.to_csv('result.csv', index=False)
result.head(10)

###Output
_____no_output_____

###Markdown
Capstone Project: Create a Customer Segmentation Report for Arvato Financial Services
In this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.
If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and have not been pre-cleaned. You are also free to choose whatever approach you'd like for analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings.
###Code # import libraries here; # add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 1: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. 
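All four project files share the same semicolon-delimited format, so a small helper keeps the `read_csv()` calls consistent. The sketch below is only an illustration (the `load_arvato_csv` name is made up here); passing `low_memory=False` simply follows the hint in the DtypeWarning that appears when the general population file is first loaded.

```python
# Illustrative helper for loading the ';'-separated project files.
# low_memory=False follows the DtypeWarning hint ("Specify dtype option on import or set low_memory=False").
import pandas as pd

def load_arvato_csv(path):
    """Read one of the Arvato project CSV files, which use ';' as the separator."""
    return pd.read_csv(path, sep=';', low_memory=False)

# Example usage (same file as the next cell):
# azdias = load_arvato_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv')
```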
###Code # load in the general population data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') ###Output /opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2785: DtypeWarning: Columns (18,19) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result) ###Markdown Step 1.1: Exploring the data ###Code #List top 5 rows of the data azdias.head() #Drop the "LNR" column as it represent the user id and not needed now azdias = azdias.drop("LNR", axis=1) #Understand the data shape azdias.shape #Explore the data columns types azdias.dtypes.value_counts() ###Output _____no_output_____ ###Markdown Step 1.2: Data SamplingThe general population data is too large which makes Kernel takes very long time to run and unable to complete the project, so we will take sample from the data around 10% to work on ###Code #Run sample with fraction 10% azdias_sample=azdias.sample(frac=0.1) #Get the new sample shape azdias_sample.shape #descriptive statistics of the general population dataset azdias_sample.describe() #Understand the NAN empty fields azdias_sample.isna().sum() azdias_sample.isna().sum().sum() #To explore the object type columns azdias_sample.select_dtypes(include='object') ###Output _____no_output_____ ###Markdown Part 2: Preprocessing Step 2.1: Explore Missing DataIn order to assess the missing data and to be able to clean the data we will use the DIAS Attributes file which contains attributes and proposerties of each data column. But in order to be able to use such file to do the mapping with our data file we will need to do some manipulation to forumlate the file to be able to map it to our data ###Code #Load the attributes file features=pd.read_excel('DIAS Attributes - Values 2017.xlsx') features.head() #Remove un-needed column del features['Unnamed: 0'] features.head() #Now we will filter the features dataframe to include the rows where meaning is 'unknown' or contains 'no' to represent missing values features_missing=features.loc[features['Meaning'].str.contains("unknown") | features['Meaning'].str.contains("no ")] features_missing.head() ###Output _____no_output_____ ###Markdown Step 2.2: Manipulate the Value column with the Meaning Column to represent all missing values once ###Code #Now we will use forward fill to fill-in the missing values in the dataframe features_fillin = features_missing['Attribute'].fillna(method='ffill') features_missing['Attribute'] = features_fillin features_missing.head() #Now we will create new column to represent all missing value attributes mappping to the equivalent attributes NANs = [] for attr in features_missing['Attribute'].unique(): lst = features_missing.loc[features_missing['Attribute'] == attr, 'Value'].astype(str).str.cat(sep=',') lst = lst.split(',') lst=list(map(int, lst)) NANs.append(lst) #Now create new dataframe consist of the missing/unknown values with the equivaluent attributes features_final = pd.concat([pd.Series(features_missing['Attribute'].unique()), pd.Series(NANs)], axis=1) features_final.columns = ['attribute', 'missing_or_unknown'] features_final.head() ###Output _____no_output_____ ###Markdown Step 2.3: Convert Missing Value Codes to NaNsNow we will use the column of the 'missing_or_unknown' with its values (`[-1,0]`) to make use of it to identify and clean the data. 
Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value ###Code att_index = features_final.set_index('attribute') #create copy of the sample general population dataset to work on and make the convertion azdias_sample_NAN = azdias_sample[:] #Now we will iterate over the dataframe columns and map each attribute in the feature list to convert its missing value with NAN for item in features_final['attribute']: if item in azdias_sample_NAN.columns: print (item) azdias_sample_NAN[item].replace(att_index.loc[item].loc['missing_or_unknown'],np.NaN,inplace=True) else: print("not found") continue #Confirm the view after conversion azdias_sample_NAN.head() ###Output _____no_output_____ ###Markdown Step 2.4: Assess Missing Data in Each ColumnNow we will need to assess how much missing data is present in each column as there are a few columns that are outliers in terms of the proportion of values that are missing.We will visualize the distribution of missing value counts to find these columns and we might remove them from the dataframe ###Code #Calculate the missing values portion per feature feature_null_percent=(azdias_sample_NAN.isna().sum()* 100 / len(azdias_sample_NAN)) #Visualize the percent of null values per feature plt.title("Missing values distribution in general population dataset",fontsize=13,fontweight="bold") plt.xlabel("feature name",fontsize=13) plt.ylabel("% of missing values",fontsize=13) (feature_null_percent.sort_values(ascending=False)[:50].plot(kind='bar', figsize=(20,8), fontsize=13)); #Distribution of missing values by rows row_nul_percent=azdias_sample_NAN.isna().sum(axis=1) plt.hist(row_nul_percent) plt.title("Number of missing values by row",fontsize=10,fontweight="bold") plt.xlabel("count of missings",fontsize=10) plt.ylabel("count",fontsize=10) #Since many of the columns have missing values above 40%, we will drop these columns cols_to_drop = azdias_sample_NAN.columns[azdias_sample_NAN.isnull().mean() > .4] print(cols_to_drop) #Now we will create new dataframe after dropping the previous columns azdias_sample_filterd = azdias_sample_NAN.loc[:, azdias_sample_NAN.isnull().mean() < .4] (azdias_sample_filterd.isna().sum()* 100 / len(azdias_sample_filterd)).reset_index(name="n").plot(kind='bar', x='index', y='n',figsize=(20,10)) azdias_sample_filterd.head() ###Output _____no_output_____ ###Markdown Step 2.5: Select and Re-Encode FeaturesSince unsupervised learning techniques only work on data that is encoded numerically, we need to make a few encoding changes. So we will check the categorical and mixed-type features and make a decision on each of them, whether we will keep, drop, or re-encode each.- For binary (two-level) categoricals that take numeric values, we will keep them without change- For binary variable that takes on non-numeric values we will re-encode the values as numbers or create a dummy variable.- For multi-level categoricals (three or more values) we will just drop them from the analysis. ###Code #Filter the categorical/mixed type data columns to explore azsiad_sample_categ=azdias_sample_filterd.columns[azdias_sample_filterd.dtypes == "object"] azsiad_sample_categ # Re-encode categorical variable(s) to be kept in the analysis. 
we will create two lists, one for binary and one for multi-level binary=[] multi_level=[] for i in azsiad_sample_categ: if azdias_sample_filterd[i].nunique() == 2: binary.append(i) elif azdias_sample_filterd[i].nunique() > 2: multi_level.append(i) multi_level #Drop the multi-level categorical columns and create dataframe azdias_sample_binary=azdias_sample_filterd.drop(multi_level, axis=1) #Explore the binary categorical column values for i in binary: print(azdias_sample_binary[i].value_counts()) #Now we will replace "W" value with 1 to have both numerical shape to be able to convert the whole column to numerical azdias_sample_binary.replace({'OST_WEST_KZ' : { 'W' : 1}}, inplace=True) for i in binary: print(azdias_sample_binary[i].value_counts()) #Now we will convert the categorical column to numerical to eb able to include in our model azdias_sample_binary['OST_WEST_KZ'] = pd.to_numeric(azdias_sample_binary['OST_WEST_KZ'], errors='coerce') azdias_sample_binary.isna().sum() # Now we will clean the dataset of all NaN values #First will create copy of the dataset azdias_sample_clean=azdias_sample_binary[:] #We will use imputer function to fillin the NAN values with mean from sklearn.preprocessing import Imputer miss_mean_imputer = Imputer(missing_values='NaN', strategy='mean', axis=0) final_azdias = pd.DataFrame(miss_mean_imputer.fit_transform(azdias_sample_clean)) final_clean_azdias=pd.DataFrame(final_azdias) #To check that no more NAN values final_clean_azdias.isna().sum() ###Output _____no_output_____ ###Markdown Step 2.6: Create a Cleaning FunctionNow since we've finished cleaning up the general population demographics data, we'll need to perform the same cleaning steps on the customer demographics data.So we will create a cleaning function include all the above steps to apply cleaning in one step ###Code def clean_data(df): """ Perform feature trimming, re-encoding, and engineering for demographics data INPUT: Demographics DataFrame OUTPUT: Trimmed and cleaned demographics DataFrame """ # Put in code here to execute all main cleaning steps: # convert missing value codes into NaNs, ... att_index = features_final.set_index('attribute') for item in features_final['attribute']: if item in df.columns: df[item].replace(att_index.loc[item].loc['missing_or_unknown'],np.NaN,inplace=True) else: continue # select, re-encode, and engineer column values. df = df.loc[:, df.isnull().mean() < .4] categ=df.columns[df.dtypes == "object"] binary=[] multi_level=[] for i in categ: if df[i].nunique() == 2: binary.append(i) elif df[i].nunique() > 2: multi_level.append(i) df=df.drop(multi_level, axis=1) df.replace({'OST_WEST_KZ' : { 'W' : 1}}, inplace=True) df['OST_WEST_KZ'] = pd.to_numeric(df['OST_WEST_KZ'], errors='coerce') cols = df.columns miss_mean_imputer = Imputer(missing_values='NaN', strategy='mean', axis=0) df = miss_mean_imputer.fit_transform(df) df=pd.DataFrame(df) # Return the cleaned dataframe. return df ###Output _____no_output_____ ###Markdown Part 3: Feature Transformation Step 3.1: Feature ScalingBefore we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. We will use StandardScaler to scale each feature to mean 0 and standard deviation 1. ###Code # Apply feature scaling to the general population sample demographics data. 
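# Note (added for clarity): StandardScaler standardises each feature to mean 0 and unit variance, as described above.
# Fitting it on the cleaned general-population sample also lets the same fitted scaler be reused later
# (via transform) on the customer sample, so both datasets end up on a comparable scale.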
from sklearn.preprocessing import StandardScaler scaler = StandardScaler() standrd_azdians=scaler.fit_transform(final_clean_azdias) standrd_azdians=pd.DataFrame(standrd_azdians) standrd_azdians.head() standrd_azdians.shape ###Output _____no_output_____ ###Markdown Step 3.2: Dimensionality ReductionOn the scaled data, we will apply dimensionality reduction techniques as follow:We will Use PCA class to apply principal component analysis on the data to find the vectors of maximal variance in the data. Then we will check the ratio of variance explained by each principal component as well as the cumulative variance explained. We will plot the cumulative or sequential values to help in selecting a value for the number of transformed features to retain for the clustering part. With the number of components to keep, we will re-fit a PCA instance to perform the decided-on transformation. ###Code # Apply PCA to the data. from sklearn.decomposition import PCA def do_pca(n_components, data): pca = PCA(n_components) X_pca = pca.fit_transform(data) return pca, X_pca pca,X_pca= do_pca(334, standrd_azdians) def pca_plot(pca, cumulative=True, figsize=(8,10)): ''' Creates a pca plot associated with the principal components INPUT: pca - the result of instantian of PCA in scikit learn OUTPUT: None ''' components_numbers = len(pca.explained_variance_ratio_) indx = np.arange(components_numbers) values = pca.explained_variance_ratio_ plt.figure(figsize=(20, 6)) ax = plt.subplot(111) cumvals = np.cumsum(values) ax.bar(indx, values) ax.plot(indx, cumvals) for i in range(components_numbers): ax.annotate(r"%s%%" % ((str(values[i]*100)[:4])), (indx[i]+0.2, values[i]), va="bottom", ha="center", fontsize=12) ax.xaxis.set_tick_params(width=0) ax.yaxis.set_tick_params(width=2, length=12) ax.set_xlabel("Principal Component") ax.set_ylabel("Variance Explained (%)") plt.title('Explained Variance Per Principal Component') pca_plot(pca) '''Re-apply PCA to the data while selecting for number of components to retain. We will select 150 components as represent almost 90% of the variance in the dataset ''' pca,X_pca= do_pca(150, standrd_azdians) X_pca.shape ###Output _____no_output_____ ###Markdown Part 4: Clustering Step 4.1: Apply Clustering to General Population SampleNow we will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.- Use sklearn's KMeans class to perform k-means clustering on the PCA-transformed data.- Compute the average difference from each point to its assigned cluster's center.- We will perform the above two steps for a number of different cluster counts. You can then see how the average distance decreases with an increasing number of clusters. - Once we select a final number of clusters to use, we will re-fit KMeans to perform the clustering operation. ###Code # Over a number of different cluster counts... 
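# (Elbow heuristic) For each candidate number of clusters we fit k-means on the PCA-transformed data and
# record the absolute score, i.e. the sum of squared distances to the assigned centroids (SSE); the value of k
# after which the curve stops improving noticeably is a reasonable choice for the final number of clusters.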
from sklearn.cluster import KMeans def get_kmeans_score(data, center): ''' returns the kmeans score regarding SSE for points to centers INPUT: data - the dataset you want to fit kmeans to center - the number of centers you want (the k value) OUTPUT: score - the SSE score for the kmeans model fit to the data ''' #instantiate kmeans kmeans = KMeans(n_clusters=center) # Then fit the model to your data using the fit method model = kmeans.fit(data) # Obtain a score related to the model fit score = np.abs(model.score(data)) return score scores = [] centers = list(range(5,30,5)) for center in centers: scores.append(get_kmeans_score(X_pca, center)) scores # We will visualize the change in within-cluster distance across number of clusters. plt.plot(centers, scores, linestyle='--', marker='o', color='b'); plt.xlabel('K'); plt.ylabel('SSE'); plt.title('SSE vs. K'); # Re-fit the k-means model with the selected number of clusters and obtain # cluster predictions for the general population demographics data sample and we will use 15 clusters. kmeans_mod = KMeans(n_clusters=15) model_pop = kmeans_mod.fit(X_pca) pred_pop = model_pop.predict(X_pca) pred_pop ###Output _____no_output_____ ###Markdown Part 5: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Step 5.1: Apply All Steps to the Customer DataNow we have clusters and cluster centers for the general population, we will work on check how the customer data maps on to those clusters.- We will when load the customers data.- We will take 10% sampling ratio as we did with the genral population data.- Apply the same feature wrangling, selection, and engineering steps to the customer demographics using the clean function created earlier. ###Code #Load Customers data customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') customers.head() #Drop the "LNR" column as it represent the user id and not needed now customers = customers.drop("LNR", axis=1) #Check the data shape customers.shape #Check the data types counts customers.dtypes.value_counts() # Take sample from the customers dataset to be same proportion with the general population data. 
customers_sample = customers.sample(frac=0.1)
# Check the extra columns compared to the general population data
customers_sample.select_dtypes(include='object')
# Drop the extra columns to match the general population shape
customers_sample.drop(['PRODUCT_GROUP','CUSTOMER_GROUP'], axis=1, inplace=True)
customers_sample.select_dtypes(include='object')
# Apply the clean function to the customers data
cleaned_customers = clean_data(customers_sample)
# Recheck the cleaned data shape to ensure it matches the general population shape
cleaned_customers.shape
# Ensure there are no missing values in the customers data
cleaned_customers.isna().sum()
###Output
_____no_output_____
###Markdown
Step 5.2: Compare Customer Data to Demographics Data
Here, we will compare the two cluster distributions for the demographics data of the general population and the customer data of the mail-order sales company to see where the strongest customer base for the company is.
We will check the proportion of persons in each cluster for the general population, and the proportions for the customers. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population, that suggests the people in that cluster are a target audience for the company. On the other hand, if the proportion of the data in a cluster is larger in the general population than in the customer data, that suggests that group of persons is outside of the target demographics.
What we will do is compute the proportion of data points in each cluster for the general population and the customer data. Visualizations will be useful here: both for the individual dataset proportions, and also to visualize the ratios in cluster representation between the groups.
###Code
# Apply the standard scaler and PCA fitted on the general population to the customer sample data
# (transform only, so the customer data is projected into the same space and the cluster labels are comparable)
customers_stand = scaler.transform(cleaned_customers)
cust_pca = pca.transform(customers_stand)
pred_cust = model_pop.predict(cust_pca)
pred_cust
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.
pop_prop = []
cust_prop = []
x = [i+1 for i in range(15)]
for i in range(15):
    pop_prop.append((pred_pop == i).sum()/len(pred_pop))
    cust_prop.append((pred_cust == i).sum()/len(pred_cust))
prop_data = pd.DataFrame({'clusters' : x, 'population_prop' : pop_prop, 'customers_prop':cust_prop})
prop_data.plot(x='clusters', y = ['population_prop', 'customers_prop'], kind='bar', figsize=(15,8))
plt.show()
###Output
_____no_output_____
###Markdown
Observation
From the above distributions it is clear that the company's customer base is not universal, as the cluster assignment proportions differ between the two datasets. The company should target clusters such as 14, 12 and 15, where customers are most strongly represented, and de-prioritise clusters such as 3, 4 and 8, where customers are least represented.
Part 6: Supervised Learning Model
Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign.
Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. Step 6.1: Apply All Steps to the Mailout Train Data- We will when load the Mailout Train data.- We will take 10% sampling ratio as we did earlier for sake of proportional comparison- Apply the same feature wrangling, selection, and engineering steps to the Mailout data using the clean function created earlier. ###Code Mailout_data_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') Mailout_data_train.head() Mailout_data_train.shape Mailout_data_train = Mailout_data_train.drop("LNR", axis=1) Mailout_data_sample=Mailout_data_train.sample(frac=0.1) Mailout_data_sample['RESPONSE'].value_counts() mailout_label_train = Mailout_data_sample["RESPONSE"] mailout_data_train = Mailout_data_sample.drop("RESPONSE", axis=1) mailout_data_train.shape mailout_data_train.select_dtypes(include='object') #Apply the clean function clean_mailout_train=clean_data(mailout_data_train) clean_mailout_train.shape #We will test and split our data to prepare the model from sklearn.model_selection import train_test_split X_train, X_val, y_train, y_val = train_test_split(clean_mailout_train, mailout_label_train, test_size=0.2, random_state=1) #We will apply Supervised logistic regression model from sklearn import model_selection from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV #kfold = model_selection.KFold(n_splits=10, random_state=7, shuffle=True) #lr = LogisticRegression(random_state=1) param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] } clf = GridSearchCV(LogisticRegression(penalty='l2'), param_grid) GridSearchCV(cv=None, estimator=LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True, penalty='l2', tol=0.0001), param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]}) grid_fit = clf.fit(X_train, y_train) print(clf.best_estimator_.get_params()) print(clf.best_score_) best_clf = grid_fit.best_estimator_ preds = best_clf.predict(X_val) #Calculate the model prediction accuracy on the train data from sklearn.metrics import roc_auc_score print("ROC score on validation data: {:.4f}".format(roc_auc_score(y_val, preds))) !pip install cmake --upgrade !pip install xgboost # Import necessary packages from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV, RandomizedSearchCV from sklearn.model_selection import train_test_split from sklearn.ensemble import AdaBoostClassifier from sklearn.metrics import * from xgboost.sklearn import XGBRegressor # Extreme Gradient Boosting import xgboost as xgb # Initialze the estimators rf = RandomForestClassifier(random_state=42) # Initiaze the hyperparameters for each dictionary param1 = {} param1['classifier__n_estimators'] = [10, 50, 100, 250] param1['classifier__max_depth'] = [5, 10, 20] 
param1['classifier__class_weight'] = [None, {0:1,1:5}, {0:1,1:10}, {0:1,1:25}] param1['classifier'] = [rf] pipeline1 = Pipeline([('classifier', rf)]) rf1 = GridSearchCV(pipeline1, param1, cv=3, n_jobs=-1, scoring='roc_auc').fit(X_train, y_train) print (rf1.best_params_) print(rf1.best_score_) bestrf = rf1.best_estimator_ preds_rf = bestrf.predict(X_val) print("ROC score on validation data: {:.4f}".format(roc_auc_score(y_val, preds_rf))) parameters = { "loss":["deviance"], "learning_rate": [0.01, 0.05, 0.1, 0.2], "max_depth":[3,5,8], "max_features":["log2","sqrt"], "subsample":[0.5, 0.8, 1.0], "n_estimators":[10] } #passing the scoring function in the GridSearchCV gbst = GridSearchCV(GradientBoostingClassifier(), parameters,refit=False,cv=3, n_jobs=-1, scoring='roc_auc').fit(X_train, y_train) print (gbst.best_params_) print(gbst.best_score_) p_test3 = {'learning_rate':[0.2], 'n_estimators':[10]} tuning = GridSearchCV(estimator =GradientBoostingClassifier(max_depth=3, subsample=0.5, random_state=42, loss='deviance', max_features= 'log2'), param_grid = p_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5) best_gbst=tuning.fit(X_train,y_train) preds_gbst = best_gbst.predict(X_val) print("ROC score on validation data: {:.4f}".format(roc_auc_score(y_val, preds_gbst))) xgb_param_grid = {"max_depth": [5,10,20,30], "learning_rate": [0.01,0.1,0.5,0.9,1.], "gamma":[0.1,0.5,1.0], "n_estimators":[50,100,150,200] } gs3 = GridSearchCV(estimator = xgb.XGBClassifier(objective="binary:logistic", n_jobs=-1, eval_metric="auc", random_state=42), param_grid = xgb_param_grid, scoring = "roc_auc", cv = 3, n_jobs = -1).fit(X_train, y_train) print (gs3.best_params_) print(gs3.best_score_) bestclf = gs3.best_estimator_ best_predictions = bestclf.predict(X_val) best_predictions = best_clf.predict(X_val) roc_auc_score(y_val, best_predictions) ###Output _____no_output_____ ###Markdown Step 6.2: Apply All Steps & Model chosen to the Mailout Test Data ###Code Mailout_data_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') Mailout_data_test.shape Mailout_data_test.head() Mailout_data_test_sample=Mailout_data_test.sample(frac=0.1) LNR = Mailout_data_test_sample['LNR'].tolist() Mailout_data_test_sample= Mailout_data_test_sample.drop("LNR", axis=1) clean_mailout_test=clean_data(Mailout_data_test_sample) clean_mailout_test.shape model = LogisticRegression(random_state=1) model.fit(X_val, y_val) results = model.predict(clean_mailout_test) results.shape results = pd.DataFrame({'RESPONSE':results}) results['LNR'] = LNR result = results[['LNR', 'RESPONSE']] result.head() result['RESPONSE'].value_counts() ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. 
The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from time import time # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. 
Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', delimiter=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', delimiter=';') azdias.shape df = pd.DataFrame(azdias.isnull().sum() *100 / azdias.shape[0], index = azdias.columns, columns = ['value']) df[df.value >= 30] cols_to_drop = ['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'EXTSEL992', 'KK_KUNDENTYP'] azdias.drop(cols_to_drop, axis = 1, inplace = True) cols = azdias.columns cols_int = cols.difference(['CAMEO_DEUG_2015', 'CAMEO_DEU_2015', 'CAMEO_INTL_2015', 'OST_WEST_KZ', 'D19_LETZTER_KAUF_BRANCHE', 'EINGEFUEGT_AM', 'WOHNLAGE']) for col in cols_int: print(col) print(azdias[col].dtype) start = time() azdias[col] = azdias[col].fillna(-9).astype('int') finish = time() print(col, finish - start) feat_info = pd.read_csv('type_feat.csv') feat_info.head() import re def to_list(text): matches = re.findall(r'[0-9-]+', text) #text.replace('[', '').replace(']', '').split(',') result = [-9] for i in matches: result.append(int(i)) return result feat_info['missing'] = feat_info['missing_or_unknown'].apply(to_list) feat_info.head() cols = azdias.columns.tolist() for i in range(feat_info.shape[0]): start = time() variable = feat_info.iloc[i, 0] liste = feat_info.iloc[i, 4] if variable in cols: azdias.loc[azdias[variable].isin(liste), variable] = np.NaN final = time() print(i, final - start) azdias.head() azdias['AKT_DAT_KL'].dtype azdias['new'] = azdias['W_KEIT_KIND_HH'].fillna(-9).astype(object).astype('int') liste = [-1, 0] liste2 = [-1, 0, -9] from time import time start = time() azdias.loc[azdias['W_KEIT_KIND_HH'].isin(liste), 'W_KEIT_KIND_HH'] = np.NaN final = time() print(final - start) start2 = time() azdias.loc[azdias['new'].isin(liste2), 'new'] = np.NaN final2 = time() print(final2 - start2) cols = azdias.columns cols_int = cols.difference(['CAMEO_DEUG_2015', 'CAMEO_DEU_2015', 'CAMEO_INTL_2015', 'OST_WEST_KZ', 'D19_LETZTER_KAUF_BRANCHE', 'EINGEFUEGT_AM', 'WOHNLAGE']) #azdias[cols_int] = azdias[cols_int].fillna(-9).astype(object).astype('int') for col in cols_int: print(col) print(azdias[col].dtype) start = time() azdias[col] = azdias[col].fillna(-9).astype('int') finish = time() print(col, finish - start) feat_info = pd.read_csv('type_feat.csv') feat_info.head() import re def to_list(text): matches = re.findall(r'[0-9-]+', text) #text.replace('[', '').replace(']', '').split(',') result = [-9] for i in matches: result.append(int(i)) return result feat_info['missing'] = 
feat_info['missing_or_unknown'].apply(to_list) feat_info.head() azdias['ANREDE_KZ'] = azdias['ANREDE_KZ'].replace(-9, np.nan) azdias['CJT_GESAMTTYP'] = azdias['CJT_GESAMTTYP'].replace(-9, np.nan) azdias.head() azdias.iloc[:, 1:].replace(to_replace = -9, value = np.nan, inplace = True) azdias.loc[azdias['AKT_DAT_KL'] == -9, 'AKT_DAT_KL'] = np.nan azdias.iloc[:, 1:] = azdias.iloc[:, 1:].replace(-9, np.nan) azdias['CJT_GESAMTTYP'].isnull().sum() azdias['ANREDE_KZ'].isnull().sum() s = time() azdias.loc[azdias['AGER_TYP'].isin([-9, -1, 0]), 'AGER_TYP'] = np.NaN f = time() print(f - s) azdias['AGER_TYP'].isnull().sum() azdias.head() azdias.columns[azdias.columns.str.startswith('ALTER')] pd.DataFrame(azdias.isnull().sum() *100 / azdias.shape[0], index = azdias.columns).hist() df = pd.DataFrame(azdias.isnull().sum() *100 / azdias.shape[0], index = azdias.columns, columns = ['value']) df.head() df[df.value >= 30] cols_to_drop = df[df.value >= 30].index.tolist() azdias.drop(cols_to_drop, axis = 1, inplace = True) azdias.head() cols = azdias.columns.tolist() cols azdias['AGER_TYP'].dtype def convert_to_nan(): feat_info[feat_info.attribute.str.startswith('CAMEO')] feat_info.iloc[11] cols = azdias.columns.tolist() cols[17] azdias[azdias['CJT_TYP_1'] == np.NaN] cols[18] azdias.to_csv(index = False) customers.head() customers.shape azdias[azdias.LNR == 143874] # Be sure to add in a lot more cells (both markdown and code) to document your # approach and findings! ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_lnr = mailout_train.LNR.tolist() azdias_lnr = azdias.LNR.tolist() common_lnr = [i for i in mailout_lnr if i in azdias_lnr] len(common_lnr) mailout_train.columns.get_loc('RESPONSE') mailout_train.iloc[:, 364].sum() mailout_train.shape ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. 
If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans, MiniBatchKMeans from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.decomposition import PCA from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from datetime import datetime # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. 
Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. Data load and Pre-processing ###Code # load in the data # azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') # customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') azdias = pd.read_csv('Udacity_AZDIAS_052018.csv', sep=';') ###Output /home/sumit/.local/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3062: DtypeWarning: Columns (18,19) have mixed types.Specify dtype option on import or set low_memory=False. has_raised = await self.run_ast_nodes(code_ast.body, cell_name, ###Markdown It appears there is data of mixed types in these two columns and pandas could not convert successfully and saved them as objects instead. We will look at these later. First, let's verify the data against the spreadsheet provided with valid values ('DIAS Attributes - Values 2017.xlsx'). ###Code mix_type_columns = azdias.columns[[18,19]] print(mix_type_columns) print(azdias.shape) azdias.head(n=5) # Let's load the spreadsheet azdias_features = pd.read_excel('DIAS Attributes - Values 2017.xlsx', header=1, usecols="B:E") azdias_features.head(n=5) # We can use forward fill to update Attribute and Description columns azdias_features = azdias_features.fillna(method='ffill') # Let's find all attribute values that have valid values i.e. ignore Unknowns azdias_features = azdias_features[~(azdias_features.Meaning.str.contains('no ') | azdias_features.Meaning.str.contains('unknown'))] azdias_features.head() # Create a dataframe with valid values for the features available in the spreadsheet ('DIAS Attributes - Values 2017.xlsx') azdias_valid_feature_values = pd.DataFrame() for feature in azdias_features.Attribute.unique(): values = list(azdias_features[azdias_features.Attribute==feature]['Value']) azdias_valid_feature_values = azdias_valid_feature_values.append(pd.DataFrame([[feature, values]]), ignore_index = True) azdias_valid_feature_values.columns = ['Attribute', 'Values'] # There are some features without a fixed range of values in the spreadsheet. 
Lets look at those numeric_features_per_spreadsheet = azdias_features[azdias_features.Meaning.str.contains('numeric value')] numeric_features_per_spreadsheet #Lets update Value to a appropriate range in azdias_valid_feature_values dataframe for these features a = [list(range(1,11))] b = [list(range(1,4))] year = [list(range(1900, 2020))] cars_num = [list(range(0,4000))] # Columns that typically have a value between 1 and 10 for col in ['ANZ_HAUSHALTE_AKTIV', 'ANZ_HH_TITEL', 'ANZ_TITEL']: azdias_valid_feature_values.loc[azdias_valid_feature_values.Attribute == col, 'Values'] = pd.Series(a, index = np.where(azdias_valid_feature_values.Attribute == col)[0]) # Columns that typically have a value between 1 and 3 azdias_valid_feature_values.loc[azdias_valid_feature_values.Attribute == 'ANZ_PERSONEN', 'Values'] = pd.Series(b, index = np.where(azdias_valid_feature_values.Attribute == 'ANZ_PERSONEN')[0]) # Columns that have year as value for col in ['GEBURTSJAHR', 'MIN_GEBAEUDEJAHR']: azdias_valid_feature_values.loc[azdias_valid_feature_values.Attribute == col, 'Values'] = pd.Series(year, index = np.where(azdias_valid_feature_values.Attribute == col)[0]) # Column that has number of cars as value. Current max is 2300. Using a range 0-4000 for the sake of similicity for now. azdias_valid_feature_values.loc[azdias_valid_feature_values.Attribute == 'KBA13_ANZAHL_PKW', 'Values'] = pd.Series(cars_num, index = np.where(azdias_valid_feature_values.Attribute == 'KBA13_ANZAHL_PKW')[0]) # Columns that are common in 'DIAS Attributes - Values 2017.xlsx' spreadsheet and azdias dataframe def get_common_columns(): ''' This function returns a list of columns are present in 'DIAS Attributes - Values 2017.xlsx' and also in azdias dataset Input: None Output: List of column names ''' return set(azdias.columns).intersection(set(azdias_valid_feature_values.Attribute)) common_columns = get_common_columns() # Update values to NaNs in azdias dataframe if the values are outside of the range as defined in azdias_valid_feature_values for col in common_columns: valid_values = azdias_valid_feature_values.loc[azdias_valid_feature_values.Attribute==col, 'Values'].iloc[0] invalid_values_idx = ~azdias.loc[:,col].isin(valid_values) azdias.loc[invalid_values_idx,col] = np.nan def get_non_numeric_data_rows(series): ''' This function takes in a Pandas series as input and returns the row indices containing non numeric data Input: Pandas Series Output: A list containing indices for non-numeric data ''' rows_with_nonnumeric_data = [] for index, value in series[series.notnull()].items(): try: int(value) except: rows_with_nonnumeric_data.append(index) return rows_with_nonnumeric_data # Let's take care of the columns that caused mix type warning during load # CAMEO_INTL_2015 rows_with_nonnumeric_data = get_non_numeric_data_rows(azdias.CAMEO_INTL_2015) azdias.CAMEO_INTL_2015[rows_with_nonnumeric_data].value_counts() # Check different values of CAMEO_INTL_2015 feature. This feature is not present in the spreadsheet ('DIAS Attributes - Values 2017.xlsx') azdias.CAMEO_INTL_2015.value_counts() ###Output _____no_output_____ ###Markdown We can see that the majority of the rows have a numeric value. We will convert these XX to NaNs and use an Imputer later. 
XX don't appear to be a valid value for this feature ###Code #Updating 'XX' to Nan azdias.loc[azdias.CAMEO_INTL_2015=='XX', 'CAMEO_INTL_2015'] = np.NaN # Now converting to numeric type azdias.CAMEO_INTL_2015 = azdias.CAMEO_INTL_2015.astype('float') # CAMEO_DEUG_2015 # We do not have to worry about this feature since it was present in the spreadsheet. All invalid values got updated to NaNs in the code above # Percentage of rows that have missing values for each of the columns. missing_features = round((azdias.isnull().sum(axis=0)/azdias.shape[0])*100, 2).sort_values(ascending=False) # Lets plot some of features with highest missing data percentage plt.figure(figsize=(20,7)) missing_features[0:30].plot.bar(); plt.title('Features / Percentage of NaN') plt.ylabel('Percentage of NaN') ###Output _____no_output_____ ###Markdown Wow, we have some features where 99% of the values are missing. Let's drop the columns with 50% or more missing values. These features won't help us much in analysis or modeling. ###Code columns_to_drop_due_to_missing_data = list(missing_features[missing_features >= 50].index) print('Dropping following features \n {}' .format(columns_to_drop_due_to_missing_data)) azdias = azdias.drop(columns_to_drop_due_to_missing_data, axis=1) print("New dimension of azdias: {}" .format(azdias.shape)) # Delete all rows where more than 100 features have values missing rows_to_keep = azdias.isnull().sum(axis=1) < 100 azdias = azdias[rows_to_keep] print(azdias.shape) azdias.head() #Lets check object type features in azdias dataframe azdias.select_dtypes(include='object') # function to Convert EINGEFUEGT_AM to number of seconds since unix epoch start def convert_datetime_to_seconds(x): ''' This function converts datetime stored as strings to number of seconds since unix epoch Input: string Output: number of seconds since epoch as int ''' try: return datetime.strptime(x, '%Y-%m-%d %H:%M:%S').timestamp() except: # for cases where x is not a valid datetime return 0 # Convert EINGEFUEGT_AM to number of seconds azdias.EINGEFUEGT_AM = azdias.EINGEFUEGT_AM.apply(convert_datetime_to_seconds) cols_not_in_spreadsheet = list(set(azdias.columns).difference(set(azdias_valid_feature_values.Attribute))) for col in cols_not_in_spreadsheet: print(azdias[col].dtype) ###Output float64 float64 float64 float64 float64 float64 float64 int64 float64 float64 float64 int64 int64 int64 float64 int64 int64 int64 int64 float64 float64 float64 float64 int64 float64 float64 float64 float64 int64 float64 float64 float64 int64 int64 float64 float64 float64 int64 float64 int64 int64 int64 float64 int64 float64 object float64 int64 float64 int64 int64 float64 float64 int64 float64 float64 float64 int64 int64 int64 float64 float64 float64 int64 int64 float64 float64 float64 int64 int64 int64 int64 float64 float64 float64 int64 float64 float64 int64 float64 int64 int64 int64 float64 int64 int64 int64 int64 ###Markdown It appears majority of these features are of numeric type. Since, we do not have any additional information about these features. However, most of the features present in the spreadsheet (DIAS Attributes - Values 2017.xlsx) are actually categorical with the exception of a few which are numerical. 
Let's convert those to strings so we can use one-hot encoding ###Code #These were the numeric features numeric_features_per_spreadsheet for col in numeric_features_per_spreadsheet.Attribute: try: print(azdias[col].dtype) except: print('Column {} does not exist' .format(col)) ###Output float64 Column ANZ_HH_TITEL does not exist float64 Column ANZ_TITEL does not exist float64 float64 float64 ###Markdown Some of these columns have been dropped and others are in numeric formats. ###Code common_columns = set(azdias_valid_feature_values.Attribute).intersection(set(azdias.columns)) %%time # Find out features with high correlation. Dropping columns where correlation is higher than 0.7. Since, these column vary together, one of them is enough to provide # information needed for our analysis corr_matrix = azdias.corr() #sns.heatmap(corr_matrix) upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) # Find features with correlation greater than 0.7 columns_to_drop_due_to_high_correlation = [column for column in upper.columns if any(upper[column] > 0.7)] print('Columns to be dropped are {}' .format(columns_to_drop_due_to_high_correlation)) azdias = azdias.drop(columns_to_drop_due_to_high_correlation, axis=1) azdias.shape # # look at features that can potentially be one-hot encoded columns_to_hot_encode = list(azdias.select_dtypes(include='object').columns) azdias = pd.get_dummies(azdias, dummy_na=False, columns=columns_to_hot_encode, drop_first=True) azdias.shape ###Output _____no_output_____ ###Markdown Impute and Scale Cleaned Data ###Code # Impute missing values # Using median to replace NaNs as they are less susceptible to outliers than mean imputer = SimpleImputer(strategy='median') azdias_columns = azdias.columns #save a copy of cleaned azdias data frame azdias_clean = azdias #Impute azdias = imputer.fit_transform(azdias) # Scale values scaler = StandardScaler() azdias = pd.DataFrame(scaler.fit_transform(azdias), columns=azdias_columns) azdias.head() # Let's create a function for the pre-processing done earlier. 
This function can be used on customer data set to be loaded def clean_data(df): ''' This function does the pre-processing task of cleaning up the dataframe passed as input parameter Input: DataFrame to be cleaned Output: Cleaned DataFrame ''' # Update values to NaNs in azdias dataframe if the values are outside of the range as defined in azdias_valid_feature_values for col in common_columns: valid_values = azdias_valid_feature_values.loc[azdias_valid_feature_values.Attribute==col, 'Values'].iloc[0] invalid_values_idx = ~df.loc[:,col].isin(valid_values) df.loc[invalid_values_idx,col] = np.nan # Update non-numeric data in CAMEO_INTL_2015 to NaN and then convert to numeric type rows_with_nonnumeric_data = get_non_numeric_data_rows(df.CAMEO_INTL_2015) df.iloc[rows_with_nonnumeric_data, df.columns.get_loc('CAMEO_INTL_2015')] = np.NaN df.CAMEO_INTL_2015 = df.CAMEO_INTL_2015.astype('float') # Convert EINGEFUEGT_AM to number of seconds since unix epoch start df.EINGEFUEGT_AM = df.EINGEFUEGT_AM.apply(convert_datetime_to_seconds) # drop columns due to missing data df = df.drop(columns_to_drop_due_to_missing_data, axis =1) # drop columns due to high correlation df = df.drop(columns_to_drop_due_to_high_correlation, axis =1) # One hot encoding df = pd.get_dummies(df, dummy_na=False, columns=columns_to_hot_encode, drop_first=True) # return cleaned df return df # Load customers data customers = pd.read_csv('Udacity_CUSTOMERS_052018.csv', sep=';') # We got the same warning that got during azdias load. Our clean_data function should take care of these customers = clean_data(customers) print(customers.shape) customers.head() # let's check out the additional columns in customers dataframe customers_extra_columns = set(customers.columns).difference(set(azdias.columns)) # Do these have NaNs customers[customers_extra_columns].isnull().sum(axis=0) # That's good we do not have NaNs in these columns. Let's save this data off to a new dataframe. customers_extra_columns_data = customers[customers_extra_columns] customers = customers.drop(customers_extra_columns, axis=1) # Lets impute and scale customers customers_columns = customers.columns #save a clean copy of customers dataframe customers_clean = customers #impute customers = imputer.transform(customers) #scale customers = pd.DataFrame(scaler.transform(customers), columns = customers_columns) customers.head() ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Dimensionality ReductionPrincipal Component Analysis is one of the ways to reduce dimensionality. It allows us to represent data in less number of dimensions while keeping most of the variability inherent in the data. First we will perform pca without specifying number of components. This allows us to select required number of components for a desired variability. Although, there is not a hard and fast rule to retain certain percentage of variability while reducing number of dimensions, 80-90% is generally considered good. We will target 80% variability. 
Once we have required number of components we will perform pca again with that number as a parameter. We have 332 features and we will soon find out how many transformed featured we end up with for our targeted variance. ###Code %%time # Lets perform PCA pca = PCA(random_state=7).fit(azdias) plt.figure(figsize=(20,7)) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('Number of Components') plt.ylabel('Cumulative Explained Variance') plt.show() # We can see here that first 200 components explain roughly 80% of the variation in the data. Let perform PCA again with 200 components pca = PCA(n_components=200, random_state=7).fit(azdias) azdias_pca = pca.transform(azdias) # Let's look at the weights of these features for each of these principal components x = ['PC']*200 y = range(1,201) principal_components = [i + str(j) for i, j in zip( x, y )] feature_weights = pd.DataFrame(pca.components_, columns = azdias.columns, index=principal_components) feature_weights.head() ###Output _____no_output_____ ###Markdown Let's find out most important features for some of these first few components. We will create a function to plot 20 most important features for these principal components ###Code def important_features(pc, draw_plot=True): ''' This function plots top 20 important features of a principal component. input: principal component name i.e. PC1, PC2 etc. output: None ''' order_index = feature_weights.loc[pc].abs().sort_values(ascending=False).index pc_feature_weights = feature_weights.loc[pc][order_index][0:20].sort_values() if draw_plot: plt.figure(figsize=(20,7)) pc_feature_weights.plot(kind='bar'); plt.title('Important Features for ' + pc) plt.ylabel('Feature Weight') plt.show() return pc_feature_weights _ = important_features('PC1') _ = important_features('PC2') ###Output _____no_output_____ ###Markdown K-Means ClusteringWe will be using MiniBatchKMeans for clustering to speedup processing. The MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. Mini-batches are subsets of the input data, randomly sampled in each training iteration. These mini-batches drastically reduce the amount of computation required to converge to a local solution. In contrast to other algorithms that reduce the convergence time of k-means, mini-batch k-means produces results that are generally only slightly worse than the standard algorithm. (from sklearn documentation) ###Code %%time scores = [] for k in range(2,21): scores.append(MiniBatchKMeans(k, random_state=7).fit(azdias_pca).score(azdias_pca)) plt.figure(figsize=(20,7)) plt.plot(np.abs(scores)) plt.xlabel("Number of Clusters") plt.ylabel("SSE") ###Output CPU times: user 4min 47s, sys: 2min 1s, total: 6min 49s Wall time: 2min 48s ###Markdown There is not a significant improvement after 12 clusters. 
Let's refit and get cluster labels for these data points ###Code kmeans = MiniBatchKMeans(12, random_state=7).fit(azdias_pca) azdias_labels = kmeans.predict(azdias_pca) #Let's transform customer data and get the cluster labels customers_pca = pca.transform(customers) customers_labels = kmeans.predict(customers_pca) ###Output _____no_output_____ ###Markdown Analysis of Customer and general population Clusters ###Code # Calculate customer and general population count percentages in each of the clusters unique_azdias_labels, azdias_counts = np.unique(azdias_labels, return_counts=True) unique_customers_labels, customers_counts = np.unique(customers_labels, return_counts=True) customers_counts_percentage = customers_counts/customers_pca.shape[0] azdias_counts_percentage = azdias_counts/azdias_pca.shape[0] # Plot Customer and population percentage for each of the clusters number_of_clusters = 12 index = np.arange(number_of_clusters) width = 0.4 plt.figure(figsize=(20,7)) b1 = plt.bar(x= index, height = customers_counts_percentage, color='blue', label= 'Percentage Customer Population', width= width) b2 = plt.bar(x= index+width, height = azdias_counts_percentage, color='red', label='Percentage Total Population', width = width) plt.xticks(index + width/2, index) plt.legend(loc='best') plt.xlabel('Clusters') plt.ylabel('Percentage Population') plt.show() ###Output _____no_output_____ ###Markdown We see in this plot that cluster 3 has the highest percentage of total customers. Clusters 1, 7, 9 and 10 are some of the other clusters with significant customer concentration. ###Code # Calculate customer to general population ratio in each of the clusters customer_general_population_ratio = customers_counts/azdias_counts plt.figure(figsize=(20,7)) plt.bar(x=index, height=customer_general_population_ratio) plt.xlabel('Clusters') plt.ylabel('Customer to General Population ratio') plt.show() ###Output _____no_output_____ ###Markdown This plot suggests that cluster 9 has highest customer to population ratio of roughly 40%. I would recommend targeting population in clusters 3, 7, 9 and 10. At the same it would be advisable to see why customer to population ratio is so low in some of the clusters like 4, 6 and 8. Maybe, a special offer might entice people to become customers.Let's see if we can map these clusters to the original features we got with the dataset. It is one of the challenges with PCA where you work with transformed features instead of the original ones. ###Code # We will create a function to determine which top 10 features are important in a cluster def cluster_features(n): ''' This function determines which features are important in a cluster. First we find principal components with larger weight and then we get important features of these principal components Input: Cluster number i.e. 
0 - 12 Output: None ''' pc_weights = pd.DataFrame(kmeans.cluster_centers_[n], columns=['Weight'], index=principal_components) order_index = pc_weights.abs().sort_values(by='Weight',ascending=False).index print('These are the top 4 principal components by weight in cluster {}' .format(n)) print(pc_weights.loc[order_index][0:4]) pcs = list(pc_weights.loc[order_index][0:4].index) most_important_features = pd.Series(dtype='float') for pc in pcs: most_important_features = most_important_features.append(important_features(pc, False)) order = most_important_features.abs().sort_values(ascending=False).index most_important_features = most_important_features.loc[order][0:10].to_frame() most_important_features.reset_index(inplace=True) most_important_features.columns = ['Feature', 'Weight'] most_important_features = most_important_features.merge(azdias_features, left_on='Feature', right_on='Attribute', how='left') most_important_features = most_important_features[['Feature', 'Weight','Description']].drop_duplicates() pd.options.display.width = 1000 pd.options.display.max_colwidth = 100 print('\n These are some of the important features in this cluster') print(most_important_features) # Let's look at important features in cluster 9 which has high customer to population ratio cluster_features(9) ###Output These are the top 4 principal components by weight in cluster 9 Weight PC2 4.623554 PC1 3.058266 PC3 -1.715304 PC4 1.120980 These are some of the important features in this cluster Feature Weight Description 0 CJT_TYP_1 0.242114 NaN 1 CJT_TYP_6 -0.221066 NaN 2 ALTERSKATEGORIE_GROB -0.216327 age classification through prename analysis 7 FINANZ_VORSORGER -0.214246 financial typology: be prepared 12 CJT_TYP_3 -0.214120 NaN 13 SEMIO_PFLICHT 0.204303 affinity indicating in what way the person is dutyfull traditional minded 20 SEMIO_TRADV 0.203461 affinity indicating in what way the person is traditional minded 27 FINANZ_ANLEGER 0.200935 financial typology: investor 32 ALTER_HH 0.197289 main age within the household 53 SEMIO_RAT 0.193022 affinity indicating in what way the person is of a rational mind ###Markdown Now we have a list of important features in cluster 9 which has the highest customer to population ratio. We can see that age, financial preparedness, interest in investments etc. are important features in this cluster. Now let's look at the value of some of these features in the cluster ###Code # We will be looking at feature values in cluster 9 but same can be done for other clusters as well cluster9_indexes = np.where(customers_labels==9) def feature_value_distribution(feature): feature_df = azdias_features[azdias_features.Attribute==feature][['Value','Meaning']] value_counts = customers_clean.iloc[cluster9_indexes][feature].value_counts().to_frame().reset_index() value_counts.columns = ['Value','Counts'] df = value_counts.merge(feature_df, left_on='Value', right_on='Value', how='inner') #customers_clean.iloc[cluster10_indexes].ALTERSKATEGORIE_GROB.value_counts().bar() plt.figure(figsize=(20,7)) df = df[['Meaning','Counts']] plt.bar(df.Meaning, df.Counts) plt.title(feature) plt.show() feature_value_distribution('ALTERSKATEGORIE_GROB') feature_value_distribution('FINANZ_VORSORGER') feature_value_distribution('SEMIO_RAT') ###Output _____no_output_____ ###Markdown We can see from the plots above that most customers in cluster 9 are older than 60 yrs. old. 
Their financial preparedness appears to be low and they have a high affinity to a rational mind. Part 2: Supervised Learning Model Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign. The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code #mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') #Let's load the training dataset train = pd.read_csv('Udacity_MAILOUT_052018_TRAIN.csv', sep=';') train.head() # We have the same warning here that we got earlier in our analysis. Our cleaning function should work just fine. Let's try it out train = clean_data(train) train.head() # Let's separate out the predictors and the response variable. To keep things consistent with the sklearn library we will call the response variable y and the predictors dataset X y = train.RESPONSE X = train.drop('RESPONSE', axis=1) # Let's look at our response variable plt.figure(figsize=(20,7)) plt.bar(['Failure','Success'],y.value_counts()) plt.ylabel('Count') plt.title('Campaign Outcome') ###Output _____no_output_____ ###Markdown Our data set is imbalanced as there is a small number of positive outcomes (customers) and a very large number of negative outcomes (people who were targeted by the mailing campaign but did not become customers). Any model would do well predicting a negative outcome since we have so many of those to train on, but the real challenge is to predict a positive outcome. There are a couple of ways to work with such imbalanced data. We could under- or over-sample to get a balanced dataset. However, we will use AUROC (Area Under the Receiver Operating Characteristic curve). Most classification models calculate a probability for a positive or negative outcome. The threshold is 50% by default. So, if the calculated probability is more than 0.5 a positive outcome is predicted, otherwise a negative outcome is predicted. AUC/ROC allows us to visualize the model performance at varying thresholds. A higher area under the curve translates to higher performance. Business knowledge is usually needed to choose a threshold. It usually depends on what is more costly to a business: losing a potential customer or losing money on someone who has a low probability of becoming a customer. My opinion in this case is that losing a customer is more expensive than losing money on a mailing campaign for someone who is less likely to become a customer. Meaning, we would prefer false positives over false negatives.
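To make the threshold discussion concrete, here is a small, purely illustrative sketch with synthetic labels and scores (not the project data) showing how `roc_curve` exposes the true-positive / false-positive trade-off at different thresholds; the class ratio and score distribution are made-up assumptions. ###Code
# Illustrative sketch (synthetic data): ROC as a view of the threshold trade-off.
# The labels and scores below are fabricated for demonstration only.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.RandomState(7)
y_true = rng.binomial(1, 0.05, size=1000)                              # imbalanced labels, like RESPONSE
y_score = np.clip(y_true * 0.3 + rng.normal(0.3, 0.2, 1000), 0, 1)     # fake predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print('AUC:', round(roc_auc_score(y_true, y_score), 3))
for f, t, th in list(zip(fpr, tpr, thresholds))[::10]:
    print(f'threshold={th:.2f}  FPR={f:.2f}  TPR={t:.2f}')
###Output _____no_output_____ ###Markdown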
Impute and Scale ###Code #Impute imputer = SimpleImputer(strategy='median') X = pd.DataFrame(imputer.fit_transform(X), columns=X.columns) #Scale scaler = StandardScaler() X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns) ###Output _____no_output_____ ###Markdown ModelingLet's create a function that we can use for different classifiers and use roc_auc score as performance metric with cross validation ###Code scores = [] def roc_auc_classifier(classifier, params_grid={}, X=X, y=y): ''' Check classifier performance with GridSearchCV Input: Classifier, hyper parameters for the classifier, predictor dataset and the response variable Output: Return the best parameters for a model ''' gcv = GridSearchCV(classifier, param_grid= params_grid, scoring='roc_auc', cv=3) gcv.fit(X, y) scores.append([classifier.__class__.__name__, gcv.best_score_]) print(gcv.best_score_) return(gcv.best_estimator_) %%time # Random Forest classifier rf = RandomForestClassifier(random_state=7) params_grid = {'max_depth':[5,10], 'n_estimators':[100, 200]} roc_auc_classifier(rf, params_grid) %%time # Logistic Regression lr = LogisticRegression(random_state=7, max_iter=1000) params_grid = {'C':[0.001], 'solver':['newton-cg', 'lbfgs']} roc_auc_classifier(lr, params_grid) %%time # AdaBoostClassifier adb = AdaBoostClassifier(random_state=7) params_grid = { 'learning_rate':[0.20, .50, 1], 'n_estimators':[20, 50, 100] } roc_auc_classifier(adb, params_grid) %%time gbc = GradientBoostingClassifier(random_state=7) params_grid = { 'learning_rate':[0.1, 0.5], 'n_estimators':[100, 200] } roc_auc_classifier(gbc, params_grid) # Plot model and corresponding scores score = pd.DataFrame(scores, columns=['Model','Score']) plt.figure(figsize=(20,7)) plt.bar(score.Model, score.Score) plt.title('Classifier Scores') plt.show() ###Output _____no_output_____ ###Markdown ConclusionWe got the best AUC score of 0.76 using AdaBoostClassifier after some hyper parameter tuning. Learning rate of 0.2 and n_estimators=20 got us to achieve this result. We will be using this model to predict on test data Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
###Code #mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') test = pd.read_csv('Udacity_MAILOUT_052018_TEST.csv', sep=';') test.head() print(test.shape) # Save the LNR column test_LNR = test.LNR ###Output (42833, 366) ###Markdown Clean, Impute and Scale ###Code test = clean_data(test) test = imputer.transform(test) test = scaler.transform(test) # re-fit with the optimized parameters and predict gcv = GridSearchCV(AdaBoostClassifier(random_state=7), param_grid= {'learning_rate':[0.2], 'n_estimators':[20]}, scoring='roc_auc', cv=3) gcv.fit(X, y) predict = gcv.predict_proba(test) # Write to file predictions = pd.DataFrame(list(zip(list(test_LNR), predict[:,1])), columns = ['LNR','RESPONSE']) predictions.to_csv('Arvato.csv', index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial Services In this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task. If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and have not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. BUSINESS UNDERSTANDING This project seeks to analyse demographics data for customers of a mail-order sales company to identify parts of the population that describe the core customer base of the company, and to use that information to target individuals that are most likely to convert into becoming a customer for the company. To achieve this we need to answer the following questions: 1. What segment of the population can be targeted for the company? 2. What individual characteristics are most likely to convert to being a customer of the company?
DATA UNDERSTANDINGThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. 
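Before the actual imports and loading below, a hedged aside on that DtypeWarning: one way to avoid the mixed-type guessing is to pin the offending columns to strings at read time. The column names and the small `nrows` preview here are assumptions for illustration, not part of the original workflow. ###Code
# Hedged sketch: load the semicolon-delimited file while pinning the columns that
# usually trigger the mixed-type warning (assumed here to be the CAMEO columns) to strings.
# The nrows argument keeps this preview cheap; drop it for a full load.
import pandas as pd

azdias_preview = pd.read_csv(
    '../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv',
    sep=';',
    dtype={'CAMEO_DEUG_2015': 'object', 'CAMEO_INTL_2015': 'object'},
    nrows=1000,
)
print(azdias_preview.shape)
###Output _____no_output_____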
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import math from sklearn.preprocessing import StandardScaler from sklearn.cluster import MiniBatchKMeans from sklearn.preprocessing import Imputer from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import Pipeline from sklearn.metrics import recall_score # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Gather Load libraries, modules and the dataset ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') azdias.head() customers.head() ###Output _____no_output_____ ###Markdown ASSESS Perform descriptive data analysis to get to know the format, structure and nature of the dataset in question. First we want to assess the dataset for the general population ###Code azdis_shape = azdias.shape print('the shape of the dataset is {} with {} rows and {} columns'.format(azdis_shape, azdis_shape[0], azdis_shape[1])) ###Output the shape of the dataset is (891221, 366) with 891221 rows and 366 columns ###Markdown Running the processes below takes forever, so we will slice the data and take 50% for the rest of the process ###Code take_50 = math.ceil(((azdis_shape[0]*50) / 100)) take_50 azdias = azdias.head(take_50) customers = customers.head(take_50) print('get the datatypes of the columns') azdias.dtypes.unique() # select numeric columns numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] azdis_numeric_df = azdias.select_dtypes(include=numerics) azdis_numeric_df.columns ###Output _____no_output_____ ###Markdown The names for the columns don't make sense. So we will use the description of the data to match and replace the column names ###Code azdis_numeric_df.describe() ###Output _____no_output_____ ###Markdown From the table we can see that almost all the numeric features have missing values. The values in the features are normally spread. Some features have outliers, for example the ANZ_HAUSHALTE_AKTIV column has a 75th percentile value of 9.0 and a max of 595, definitely an outlier.
May need more insight to prove this later. Now let's take a look at the customer data. The CUSTOMERS dataframe contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP') ###Code customers_shape = customers.shape print('the shape of the dataset is {} with {} rows and {} columns'.format(customers_shape, customers_shape[0], customers_shape[1])) ###Output the shape of the dataset is (191652, 369) with 191652 rows and 369 columns ###Markdown Let's take a look at these 3 columns in the customer dataframe ###Code main_feat_customer = customers[['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP']] main_feat_customer.head() main_feat_customer.nunique() # Which groups had the most purchases plt.figure(figsize=(10,7)) sns.barplot(data=main_feat_customer, x='CUSTOMER_GROUP', y='ONLINE_PURCHASE') # what do customers buy most plt.figure(figsize=(10,7)) sns.barplot(x='CUSTOMER_GROUP', y='ONLINE_PURCHASE', hue="PRODUCT_GROUP", data=main_feat_customer); ###Output _____no_output_____ ###Markdown We can see from the graph that the single-buyer customer group purchases cosmetic products. Even within the multi-buyer group of customers, cosmetic products were purchased the most ###Code customers['ONLINE_PURCHASE'].replace({0: "OFFLINE", 1: "ONLINE"}).value_counts().plot(kind='bar') ###Output _____no_output_____ ###Markdown There are more customers who made purchases offline than online ###Code # calculate the percent null of each column percen_null = azdias.isnull().sum()/ azdias.shape[0] # select the first 50 columns and display their percent null in descending order plt.figure(figsize=(14,8)) percen_null.sort_values(ascending=False)[:50].plot(kind='bar') plt.ylabel('percent of missing values'); plt.xlabel('features'); #plt.savefig('feature_missing_values.png', dpi=500, bbox_inches='tight', pad_inches=0, transparent=True) # columns whose percent null is greater than 50% percen_null[percen_null > .5] # columns whose percent null is less than 10 percent percen_null[percen_null < .1] # columns whose percent null is between 10 and 50 percent percen_null[(percen_null < .5) & (percen_null > .1) ] ###Output _____no_output_____ ###Markdown Check the number of attributes for each feature. For this and many more insights we want to load in the feature info file to better understand our data ###Code # load in the info files features_info = pd.read_excel('DIAS Attributes - Values 2017.xlsx',\ sheet_name='Tabelle1', index_col=[0, 1, 2]).reset_index() features_info.drop('level_0', axis=1, inplace=True) features_info_levels = pd.read_excel('DIAS Information Levels - Attributes 2017.xlsx', header=1, index_col=[0, 1]).reset_index() features_info_levels.drop('level_0', axis=1, inplace=True) features_info.head() features_info[features_info.Attribute=='AGER_TYP'] ###Output _____no_output_____ ###Markdown Find out how many features from the azdias dataframe we have in the features info ###Code featrues_we_have_info = np.intersect1d(np.array(azdias.columns),features_info.Attribute.unique(), assume_unique = True) features_we_dont_have_info = np.setdiff1d(np.array(azdias.columns),features_info.Attribute.unique(), assume_unique = True) ###Output _____no_output_____ ###Markdown So far we want to concentrate on the features we have info about and check the number of unique attributes each feature has ###Code # get the features we have info about feat_info_azdias= features_info[features_info.Attribute.isin(featrues_we_have_info)] feat_info_azdias.head() # now for each unique attribute we want to count the number of unique values
they have unique_feat_table = dict({}) for att in feat_info_azdias.Attribute: if att not in unique_feat_table.keys(): unique_feat_table[att] = 0 unique_feat_table[att] += 1 unique_feat_df = pd.DataFrame.from_dict(unique_feat_table,orient='index') unique_feat_df.head() unique_feat_df.columns # let's plot them and see unique_feat_df.sort_values(0,ascending=False)[:50].plot(kind='bar', figsize=(15,9)) plt.ylabel('number of unique values'); plt.xlabel('features'); ###Output _____no_output_____ ###Markdown From the graph we see that some features have too many unique values which might not help in our analysis, so we will have to clean them later. Now from the features info, we want to get the features that have unknown values ###Code # get values that are unknown from the info data unknown_vals = features_info[features_info.Value.isin([-1,0])].Meaning.unique() unknown_vals # select rows that we have info about and that have missing values info_azdias = features_info[(features_info.Attribute.isin(featrues_we_have_info))\ & (features_info.Meaning.isin(unknown_vals))] info_azdias.head() # get features with no null values features_info = features_info[~features_info['Value'].isna()] features_info.head() # split the value column into a list -> if value is -1,0 then make it [-1,0] features_info.Value = [ str(i).split(',') for i in features_info.Value] azdias['ANZ_HAUSHALTE_AKTIV'].head() ###Output _____no_output_____ ###Markdown CLEAN Clean the dataset by performing the following steps: * Convert features to the right data formats * Remove features with no variability * Drop duplicates * Drop or fill nulls * etc. ###Code # Convert missing or unknown values to NaNs for col in azdias.columns: try: if col in features_info.index.tolist(): index = azdias[col].isin(features_info.at[col, 'Value'].tolist()[0]) azdias.loc[index,col] = np.nan #azdias.at[index, col] = np.NaN except: print('error:', col) ###Output _____no_output_____ ###Markdown Now, we have to see the percentage of missing values in the general population dataset and remove them ###Code null_azdias = azdias.isnull().sum() / azdias.shape[0] * 100 plt.hist(null_azdias, np.arange(min(null_azdias)-0.5, max(null_azdias)+0.5) ) plt.xlabel('percent of missing values') plt.ylabel('number of features') plt.savefig('missing_values.png', dpi=500, bbox_inches='tight', pad_inches=0) plt.show() ###Output _____no_output_____ ###Markdown We can see that there are a lot of columns with a high share of null values. So we need to remove them ###Code # Remove the outlier columns from the dataset.
col_outlier = null_azdias[null_azdias > 30].index # Show column outliers col_outlier # drop the outlier columns try: customers.drop(columns= [col_outlier], axis=1, inplace=True) azdias.drop(columns= [col_outlier], axis=1, inplace=True) except: print('error:', col) ###Output _____no_output_____ ###Markdown Drop Duplicated rows in each feature set ###Code # get the number of duplicated column duplicated_cols = azdias.duplicated() duplicated_cols.sum() # drop duplicates azdias.drop_duplicates(inplace=True) customers.drop_duplicates(inplace=True) ###Output _____no_output_____ ###Markdown Remove Outliers based on higher distinct values in features ###Code # Now, we need to also remove the outlier attributes from feature_info feat_info_new = features_info[features_info.index.isin(col_outlier) == False] feat_info_new.shape ###Output _____no_output_____ ###Markdown Remove featues with too many distinct values ###Code # Drop features with too many distinct values drop_columns = ['AGER_TYP', 'LNR', 'LP_FAMILIE_GROB', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'VERDICHTUNGSRAUM', 'EXTSEL992','EINGEFUEGT_AM', 'LP_STATUS_GROB', 'KBA05_BAUMAX', 'KK_KUNDENTYP', 'GEBURTSJAHR', 'ALTER_HH', 'TITEL_KZ'] try: azdias.drop(columns= [drop_columns], axis=1, inplace=True) customers.drop(columns= [drop_columns], axis=1, inplace=True) except: print('error:') customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], axis=1, inplace=True) ###Output error: ###Markdown Now we want to re-encode values in features ###Code azdias['OST_WEST_KZ'] = azdias['OST_WEST_KZ'].replace({'O':1.0, 'W':2.0}) customers['OST_WEST_KZ'] = customers['OST_WEST_KZ'].replace({'O':1.0, 'W':2.0}) azdias['CAMEO_DEUG_2015'] = azdias['CAMEO_DEUG_2015'].replace({'X':np.NaN}) azdias['CAMEO_INTL_2015'] = azdias['CAMEO_INTL_2015'].replace({'XX':np.NaN}) customers['CAMEO_DEUG_2015'] = customers['CAMEO_DEUG_2015'].replace({'X':np.NaN}) customers['CAMEO_INTL_2015'] = customers['CAMEO_INTL_2015'].replace({'XX':np.NaN}) azdias['CAMEO_DEUG_2015'] = azdias['CAMEO_DEUG_2015'].astype('float') customers['CAMEO_DEUG_2015'] = customers['CAMEO_DEUG_2015'].astype('float') ###Output _____no_output_____ ###Markdown Create Dummy variables for categorical columns in both demographic dataset ###Code categorical = ['CJT_GESAMTTYP', 'FINANZTYP', 'GFK_URLAUBERTYP', 'LP_FAMILIE_FEIN', 'LP_STATUS_FEIN', 'NATIONALITAET_KZ', 'SHOPPER_TYP', 'ZABEOTYP', 'GEBAEUDETYP', 'CAMEO_DEUG_2015', 'D19_KONSUMTYP', 'D19_LETZTER_KAUF_BRANCHE', 'ALTERSKATEGORIE_FEIN', 'EINGEZOGENAM_HH_JAHR', 'GEMEINDETYP', 'STRUKTURTYP', 'LP_LEBENSPHASE_GROB', 'CAMEO_DEU_2015', 'WOHNLAGE'] azdias = pd.get_dummies(azdias, columns=categorical) customers = pd.get_dummies(customers, columns=categorical) azdias['CAMEO_INTL_2015'] = azdias['CAMEO_INTL_2015'].astype('float') customers['CAMEO_INTL_2015'] = customers['CAMEO_INTL_2015'].astype('float') if 'EINGEZOGENAM_HH_JAHR_1986.0' in customers.columns: customers.drop('EINGEZOGENAM_HH_JAHR_1986.0', axis=1, inplace=True) if 'EINGEZOGENAM_HH_JAHR_1986.0' in azdias.columns: azdias.drop('EINGEZOGENAM_HH_JAHR_1986.0', axis=1, inplace=True) # fill na values with a dummy number -999 azdias.fillna(-999 , inplace=True) customers.fillna(-999 , inplace=True) azdias.select_dtypes(include=['object']).head() try: customers.drop('EINGEFUEGT_AM', axis=1, inplace=True) azdias.drop('EINGEFUEGT_AM', axis=1, inplace=True) except: print('error:') # Check for missing column miss_col = list(np.setdiff1d(azdias.columns, customers.columns)) print(miss_col) for 
col in miss_col: try: azdias.drop(col, axis=1 ,inplace=True) except: print(col) ###Output ['ALTERSKATEGORIE_FEIN_1.0', 'EINGEZOGENAM_HH_JAHR_1900.0', 'EINGEZOGENAM_HH_JAHR_1904.0'] ###Markdown DATA PREPARATION Now we have to put all together into a function for cleaning the dataset for preprocessing when creating the supervised model ###Code def clean_demographics_data(df): """ Perform feature trimming, re-encoding, and engineering for demographics data Args: 1. Demographics DataFrame Return: Cleaned demographics DataFrame """ for col in df: if col == 'RESPONSE': print('passing RESPONSE') pass else: if col in features_info.index.tolist(): index = df[col].isin(features_info.at[col, 'Value']) df.loc[index, col] = np.NaN # remove selected columns and rows drop_columns = ['AGER_TYP', 'LNR', 'LP_FAMILIE_GROB', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'VERDICHTUNGSRAUM', 'EXTSEL992','EINGEFUEGT_AM', 'LP_STATUS_GROB', 'KBA05_BAUMAX', 'KK_KUNDENTYP', 'GEBURTSJAHR', 'ALTER_HH', 'TITEL_KZ'] try: df.drop(columns=[drop_columns], axis=1, inplace=True) except: print('warning: at drop columns with excess unique variables') # drop outliers # try: # df.drop(columns=[col_outlier], axis=1, inplace=True) # except: # print('warning: at drop outliers') # select, re-encode, and engineer column values if df['OST_WEST_KZ'].dtypes != np.float64: df['OST_WEST_KZ'] = df['OST_WEST_KZ'].replace({'O':1.0, 'W':2.0}) if df['CAMEO_DEUG_2015'].dtypes == 'str' or df['CAMEO_DEUG_2015'].dtypes == 'object': df['CAMEO_DEUG_2015'] = df['CAMEO_DEUG_2015'].replace({'X':np.NaN}) if df['CAMEO_INTL_2015'].dtypes == 'str' or df['CAMEO_INTL_2015'].dtypes == 'object': df['CAMEO_INTL_2015'] = df['CAMEO_INTL_2015'].replace({'XX':np.NaN}) # change to float df['CAMEO_DEUG_2015'] = df['CAMEO_DEUG_2015'].astype('float') df['CAMEO_INTL_2015'] = df['CAMEO_INTL_2015'].astype('float') # create dummy variable for categorical columns categorical = ['CJT_GESAMTTYP', 'FINANZTYP', 'GFK_URLAUBERTYP', 'LP_FAMILIE_FEIN', 'LP_STATUS_FEIN', 'NATIONALITAET_KZ', 'SHOPPER_TYP', 'ZABEOTYP', 'GEBAEUDETYP', 'CAMEO_DEUG_2015', 'CAMEO_DEU_2015', 'D19_KONSUMTYP', 'ALTERSKATEGORIE_FEIN', 'D19_LETZTER_KAUF_BRANCHE', 'EINGEZOGENAM_HH_JAHR', 'GEMEINDETYP', 'STRUKTURTYP', 'LP_LEBENSPHASE_GROB', 'WOHNLAGE'] df_cleaned = pd.get_dummies(df, columns=categorical) if 'EINGEZOGENAM_HH_JAHR_1986.0' in df.columns: df_cleaned.drop('EINGEZOGENAM_HH_JAHR_1986.0', axis=1, inplace=True) try: df_cleaned.drop('EINGEFUEGT_AM', axis=1, inplace=True) except: print('error:') df_cleaned.fillna(-999 , inplace=True) # Return the cleaned dataframe. return df_cleaned ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. In this section we will perform two main operations: 1. PCA -> Reduce the dimensionality of the data. This would select feature of importance for clustering the data 2. 
Perform Clustering(K-mean) on the pca ###Code # setup pca and fit to our data # we first want to get 360 features from our component and see how those components better explain our features # then we will either increase or reduce the number of components to give maximum explainable features no_components = 360 pca = PCA(no_components) azdias_pca = pca.fit_transform(azdias) # Analyze the variance accounted for by each principal component. ind = np.arange(no_components) vals = pca.explained_variance_ratio_ plt.figure(figsize=(10, 6)) ax = plt.subplot() cumvals = np.cumsum(vals) ax.bar(ind, vals) ax.plot(ind, cumvals) for i in range(no_components): ax.set_xlabel("Principal Component") ax.set_ylabel("Percentage of Variance Explained ") plt.savefig('pca.png', dpi=500, bbox_inches='tight', pad_inches=0) # Re-apply PCA to the data while selecting for number of components to retain. sum(pca.explained_variance_ratio_) ###Output _____no_output_____ ###Markdown Clustering(K-Means) ###Code from sklearn.datasets import make_blobs #from yellowbrick.cluster import KElbowVisualizer # Instantiate the clustering model and visualizer model = MiniBatchKMeans() optimal_k = 8 kmeans = MiniBatchKMeans(n_clusters=optimal_k, random_state=15) model = kmeans.fit(azdias_pca) preds = model.predict(azdias_pca) azdias_preds = pd.DataFrame(preds, columns=['General_Population']) azdias_preds['General_Population'].value_counts().sort_index().plot(kind='bar') plt.xlabel('population group') ###Output _____no_output_____ ###Markdown Now, we need to cluster the data ###Code customers_pca = pca.transform(customers) customers_preds = model.predict(customers_pca) customers_preds = pd.DataFrame(customers_preds, columns=['Customers_Population']) data_clusters = pd.concat([customers_preds['Customers_Population'].value_counts(), azdias_preds['General_Population'].value_counts()], axis=1) data_clusters.plot(kind='bar') plt.savefig('clusters_comparison.png', dpi=500, bbox_inches='tight', pad_inches=0) ###Output _____no_output_____ ###Markdown So now, what are the likely people to become customers? ###Code target_list = azdias_preds[azdias_preds['General_Population'] == 0].index df_target = azdias.iloc[target_list] df_target.head(3) # What kinds of people are part of a cluster that is overrepresented in the # customer data compared to the general population? print('The cluster which is the most overrepresented is cluster {} with a difference of {}.' .format(max_index, np.round(max_diff, 4))) ###Output The cluster which is the most overrepresented is cluster 0 with a difference of 0.8789. ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
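One loose end from Part 1 before loading the campaign data: the print statement above refers to `max_index` and `max_diff`, which are not defined in the visible cells. A hedged sketch of how they could be derived from the cluster membership shares built earlier, reusing `customers_preds` and `azdias_preds` from the cells above; this is an assumption about the intended calculation, not the original code. ###Code
# Hedged sketch: one plausible derivation of max_index / max_diff from the
# cluster label DataFrames created above (names reused from those cells).
customer_share = customers_preds['Customers_Population'].value_counts(normalize=True)
population_share = azdias_preds['General_Population'].value_counts(normalize=True)
diff = (customer_share - population_share).fillna(0)

max_index = diff.idxmax()   # cluster most over-represented among customers
max_diff = diff.max()
print(max_index, round(max_diff, 4))
###Output _____no_output_____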
###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_train.head() mailout_train_size = mailout_train.shape # Prepare the data using the function created earlier mailout_train_clean = clean_demographics_data(mailout_train) mailout_train_clean.shape y = mailout_train_clean['RESPONSE'] # Check for missing column in `clean_mailout_train` missing = list(np.setdiff1d(mailout_train_clean.columns, azdias.columns)) print(missing) for col in missing: try: mailout_train_clean.drop(col, axis=1, inplace=True) except : print('warning:',col) ###Output _____no_output_____ ###Markdown Split the data into train test sets ###Code X = mailout_train_clean.drop('RESPONSE', axis=1) X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2) ###Output _____no_output_____ ###Markdown Checking for class imbalance ###Code (y == 1).sum() (y == 0).sum() # percent imbalance ((y == 1).sum()/y.shape[0]) * 100 ###Output _____no_output_____ ###Markdown Build model ###Code clf = RandomForestClassifier(n_estimators=10, n_jobs=2) clf.fit(X_train, y_train) y_pred = clf.predict(X_val) recall_score(y_val, y_pred, average='micro') y_pred = clf.predict(X_train) ###Output [0 0 0 ..., 0 0 0] ###Markdown Build the model tuning ###Code # import adaboost classifier from sklearn.ensemble import AdaBoostClassifier def build_model_AdaBoostClassifier(): """Build a AdaBoostClassifier model pipeline. Returns: pipline: sklearn.model_selection.GridSearchCV. AdaBoostClassifier Classifier. """ # Set machine learning pipeline print('Model building.....') pipeline = Pipeline([ ('clf', AdaBoostClassifier()) ]) parameters = {} parameters['clf__n_estimators'] = [500, 1000] parameters['clf__learning_rate'] = [0.001,0.075,0.0001] # Set parameters for gird search and set the scoring to roc curve cv = GridSearchCV(pipeline, parameters, scoring='roc_auc',cv=5, n_jobs= 1) return cv model = build_model_AdaBoostClassifier() print('Train the model...') model.fit(X_train, y_train) model.best_params_ y_test_pred = model.predict(X_val) recall_score(y_val, y_test_pred, average='micro') (y_train == 0).sum() (y_train == 1).sum() y__train_pred = model.predict(X_train) recall_score(y_train, y__train_pred, average='micro') model.cv_results_ model.cv_results_['mean_test_score'] # plot loses # Get the test recall score for each stage of prediction. n_estimators = range(0, 6) plt.figure(figsize=(14,8)) plt.plot(n_estimators,model.cv_results_['mean_test_score']) plt.plot(n_estimators,model.cv_results_['mean_train_score']) plt.title('Staged Scores') plt.ylabel('roc_auc score') plt.xlabel('K-folds') plt.legend(scores.keys()) plt.show() ###Output /opt/conda/lib/python3.6/site-packages/sklearn/utils/deprecation.py:122: FutureWarning: You are accessing a training score ('mean_train_score'), which will not be available by default any more in 0.21. If you need training scores, please set return_train_score=True warnings.warn(*warn_args, **warn_kwargs) ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. 
If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') mailout_test.shape LNX_indexes = mailout_test['LNR'] # Prepare the data using the function created earlier mailout_test_clean = clean_demographics_data(mailout_test) # Check for missing column in `clean_mailout_train` missing = list(np.setdiff1d(mailout_test_clean.columns, azdias.columns)) print(missing) for col in missing: try: mailout_test_clean.drop(col, axis=1, inplace=True) except: print('warning:', col) mailout_test_clean.shape azdias.shape # predict on the test y_final_pred = model.predict(mailout_test_clean) y_final_pred.shape LNX_indexes.shape submission = pd.DataFrame({'LNR':LNX_indexes,'RESPONSE':y_final_pred }) submission.set_index('LNR', inplace=True) submission.head() submission.columns submission.to_csv('submission.csv') ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
Importing Libraries ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pickle #imports to help me plot my venn diagrams import matplotlib_venn as venn2 from matplotlib_venn import venn2 from pylab import rcParams # import the util.py file where I define my functions from utils import * # sklearn from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler, OneHotEncoder from sklearn.impute import SimpleImputer from sklearn.decomposition import PCA from sklearn.cluster import KMeans from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.metrics import confusion_matrix,precision_recall_fscore_support from sklearn.utils.multiclass import unique_labels from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score, RandomizedSearchCV, GridSearchCV from sklearn.metrics import roc_auc_score from sklearn.pipeline import Pipeline from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier as gbm from sklearn.linear_model import LogisticRegression from sklearn.neural_network import MLPClassifier from sklearn.model_selection import StratifiedKFold, learning_curve import xgboost as xgb import lightgbm as lgb import skopt from skopt import BayesSearchCV class BayesSearchCV(BayesSearchCV): def _run_search(self, x): raise BaseException('Use newer skopt') # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. 
For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data ''' There are 2 warnings when we read in the datasets: DtypeWarning: Columns (19,20) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result) This warning happens when pandas attempts to guess datatypes on particular columns, I will address this on the pre-processing steps ''' azdias = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\azdias.csv") customers = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\customers.csv") attributes = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\features.csv") dias_xls = pd.read_excel(r"C:\Users\sousa\Desktop\github\Arvato\data\DIAS Attributes - Values 2017.xlsx", header = 1) ###Output _____no_output_____ ###Markdown I want to use the dias_xls file to help me find the values that correspond to missing or unknow so I will perform a few fixes.Namely, havind row 1 be the header for the dataframe and removing the extra unnammed column. 
###Code dias_xls.drop(columns=['Unnamed: 0'], inplace=True) dias_xls['Attribute'] = dias_xls['Attribute'].ffill() dias_xls.head() # I will now check what is the problem with the columns 19 and 20 # getting the name of these columns print(azdias.iloc[:,19:21].columns) print(customers.iloc[:,19:21].columns) ###Output Index(['CAMEO_DEUG_2015', 'CAMEO_INTL_2015'], dtype='object') Index(['CAMEO_DEUG_2015', 'CAMEO_INTL_2015'], dtype='object') ###Markdown It seems like the mixed type issue comes from that X that appears in these columns.There are ints, floats and strings all in the mix, I also want to do a quick fix on CAMEO_DEU_2015 ###Code attributes.head() azdias = special_feature_handler(azdias) customers = special_feature_handler(customers) ###Output _____no_output_____ ###Markdown Checking if values were fixed Change this cell to code if you want to perform the checksazdias.CAMEO_DEUG_2015.unique()customers.CAMEO_INTL_2015.unique() Considering the appearance of these mixed type data entries I created a function to check the dtype of the different attributesThis might be useful in case some attributes have too many category values, which might fragment the data clustering too much. ###Code #doing a quick check of categorical features and see if some are too granular to be maintained cat_check = categorical_checker(azdias, attributes) ###Output AGER_TYP 5 ANREDE_KZ 2 CAMEO_DEU_2015 45 CAMEO_DEUG_2015 9 CJT_GESAMTTYP 6 D19_BANKEN_DATUM 10 D19_BANKEN_OFFLINE_DATUM 10 D19_BANKEN_ONLINE_DATUM 10 D19_GESAMT_DATUM 10 D19_GESAMT_OFFLINE_DATUM 10 D19_GESAMT_ONLINE_DATUM 10 D19_KONSUMTYP 7 D19_TELKO_DATUM 10 D19_TELKO_OFFLINE_DATUM 10 D19_TELKO_ONLINE_DATUM 10 D19_VERSAND_DATUM 10 D19_VERSAND_OFFLINE_DATUM 10 D19_VERSAND_ONLINE_DATUM 10 D19_VERSI_DATUM 10 D19_VERSI_OFFLINE_DATUM 10 D19_VERSI_ONLINE_DATUM 10 FINANZTYP 6 GEBAEUDETYP 7 GFK_URLAUBERTYP 12 GREEN_AVANTGARDE 2 KBA05_BAUMAX 6 KK_KUNDENTYP 6 LP_FAMILIE_FEIN 12 LP_FAMILIE_GROB 6 LP_STATUS_FEIN 10 LP_STATUS_GROB 5 NATIONALITAET_KZ 4 OST_WEST_KZ 2 PLZ8_BAUMAX 5 SHOPPER_TYP 5 SOHO_KZ 2 TITEL_KZ 6 VERS_TYP 3 WOHNLAGE 8 ZABEOTYP 6 dtype: int64 ###Markdown Based on the categorical info it might be a good idea do drop CAMEO_DEU_2015 column, it is far too fragmented with 45 different category values, this is an idea to revisit after testing the models There is an extra column called Unnamed that seems like an index duplication, I will drop it We also have 3 columns that are different between azdias and customers:'CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'I will drop those to harmonize the 2 datasets ###Code customers = customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=False, axis=1) ###Output _____no_output_____ ###Markdown I will now check overal shapes of the datasets Azdias Shape ###Code # checking how the azdias dataframe looks like print('Printing dataframe shape') print(azdias.shape) print('________________________________________________________') azdias.head() ###Output Printing dataframe shape (891221, 366) ________________________________________________________ ###Markdown Customers Shape ###Code # checking how the customer dataframe looks like print('Printing dataframe shape') print(customers.shape) print('________________________________________________________') customers.head() ###Output Printing dataframe shape (191652, 366) ________________________________________________________ ###Markdown Attributes shape ###Code # Check the summary csv file print(attributes.shape) attributes.head() 
###Output (332, 5) ###Markdown On the dataframe shapes: For now it is noted that the 2 initial working dataframes are harmonized in terms of number of columns: azdias: (891221, 366) customers: (191652, 366) attributes: (332, 5) ###Code #saving the unique attribute names to lists attributes_list = attributes.attribute.unique().tolist() azdias_list = list(azdias.columns) customers_list = list(customers.columns) #establishing uniqueness of the attributes accross the datasets in work common_to_all = (set(attributes_list) & set(azdias_list) & set(customers_list)) unique_to_azdias = (set(azdias_list) - set(attributes_list) - set(customers_list)) unique_to_customers = (set(customers_list) - set(attributes_list) - set(azdias_list)) unique_to_attributes = (set(attributes_list) - set(customers_list) - set(azdias_list)) unique_to_attributes_vs_azdias = (set(attributes_list) - set(azdias_list)) unique_to_azdias_vs_attributes = (set(attributes_list) - set(azdias_list)) common_azdias_attributes = (set(azdias_list) & set(attributes_list)) print("No of items common to all 3 daframes: " + str(len(common_to_all))) print("No of items exclusive to azdias: " + str(len(unique_to_azdias))) print("No of items exclusive to customers: " + str(len(unique_to_customers))) print("No of items exclusive to attributes: " + str(len(unique_to_attributes))) print("No of items overlapping between azdias and attributes: " + str(len(common_azdias_attributes))) print("No of items exclusive to attributes vs azdias: " + str(len(unique_to_attributes_vs_azdias))) print("No of items exclusive to azdias vs attributes: " + str(len(unique_to_azdias_vs_attributes))) rcParams['figure.figsize'] = 8, 8 ax = plt.axes() ax.set_facecolor('lightgrey') v = venn2([len(azdias_list), len(attributes_list), len(common_azdias_attributes)], set_labels=('Azdias', 'Attributes'), set_colors = ['cyan', 'grey']); plt.title("Attribute presence on Azdias vs DIAS Attributes ") plt.show() ###Output _____no_output_____ ###Markdown From this little exploration we got quite a little bit of information:- There are 3 extra features in the customers dataset, it corresponds to the columns 'CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'- All the datasets share 327 features between them- The attributes file has 5 columns corresponding to feature information that does not exist in the other datasets PreprocessingNow that I have a birds-eye view of the data I will proceed with cleaning and handling missing calues, re-encode features (since the first portion of this project will involve unsupervised learning), perform some feature enginnering and scaling.Assessing missing data and replacing it with nan.I will also decide on which strategy to use before scaling features:-remove nulls -> scalling -> put nulls back or-remove nulls -> scalling -> impute ###Code azdias_pre_cleanup = azdias.copy() customers_pre_cleanup = customers.copy() unknowns_to_NANs(azdias, dias_xls) unknowns_to_NANs(customers, dias_xls) ###Output _____no_output_____ ###Markdown Since the next step involves dropping columns missing data over a threshold it is important to check if there is a column match between azdias and customers before and after the cleanup processThere is a chance that some columns are missing too much data in one dataframe and being dropped while they are abundant in the other, causing a discrepancy in the shape between the 2 dataframesIt is always hard to define a threshold on how much missing data is too much, my first approach will consider over 30% too muchBased on model 
performance this is an idea to revisit and adjust ###Code balance_checker(azdias, customers) ###Output Feature balance between dfs?: True ###Markdown Prior to cleanup customers and azdias match ###Code percent_missing_azdias_df = percentage_of_missing(azdias) percent_missing_azdias_pc_df = percentage_of_missing(azdias_pre_cleanup) percent_missing_customers_df = percentage_of_missing(customers) percent_missing_customers_pc_df = percentage_of_missing(customers_pre_cleanup) print('Identified missing data in Azdias: ') print('Pre-cleanup: ' + str(azdias_pre_cleanup.isnull().sum().sum()) + ' Post_cleanup: ' + str(azdias.isnull().sum().sum())) print('Identified missing data in Customers: ') print('Pre-cleanup: ' + str(customers_pre_cleanup.isnull().sum().sum()) + ' Post_cleanup: ' + str(customers.isnull().sum().sum())) print('Azdias columns not missing values(percentage):') print('Pre-cleanup: ', (percent_missing_azdias_df['percent_missing'] == 0.0).sum()) print('Post-cleanup: ', (percent_missing_azdias_pc_df['percent_missing'] == 0.0).sum()) print('Customers columns not missing values(percentage):') print('Pre-cleanup: ', (percent_missing_customers_df['percent_missing'] == 0.0).sum()) print('Post-cleanup: ', (percent_missing_customers_pc_df['percent_missing'] == 0.0).sum()) ###Output Azdias columns not missing values(percentage): Pre-cleanup: 87 Post-cleanup: 93 Customers columns not missing values(percentage): Pre-cleanup: 87 Post-cleanup: 93 ###Markdown Deciding on what data to maintain based on the percentage missing ###Code # missing more or less than 30% of the data azdias_missing_over_30 = split_on_percentage(percent_missing_azdias_df, 30, '>') azdias_missing_less_30 = split_on_percentage(percent_missing_azdias_df, 30, '<=') customers_missing_over_30 = split_on_percentage(percent_missing_customers_df, 30, '>') customers_missing_less_30 = split_on_percentage(percent_missing_customers_df, 30, '<=') #plotting select features and their missing data percentages figure, axes = plt.subplots(4, 1, figsize = (15,15), squeeze = False) azdias_missing_over_30.sort_values(by = 'percent_missing', ascending = False).plot(kind = 'bar', x = 'column_name', y = 'percent_missing', ax = axes[0][0], color = 'cyan', title = 'Azdias percentage of missing values over 30%' ) #due to the sheer amount of data points to be plotted this does not make an appealing vis so I will restrict #the number of plotted points to 40 azdias_missing_less_30.sort_values(by = 'percent_missing', ascending = False)[:40].plot(kind = 'bar', x = 'column_name', y = 'percent_missing', ax = axes[1][0], color = 'cyan', title = 'Azdias percentage of missing values less 30%' ) customers_missing_over_30.sort_values(by = 'percent_missing', ascending = False).plot(kind = 'bar', x = 'column_name', y = 'percent_missing', ax = axes[2][0], color = 'grey', title = 'Customers percentage of missing values over 30%' ) #due to the sheer amount of data points to be plotted this does not make an appealing vis so I will restrict #the number of plotted points to 40 customers_missing_less_30.sort_values(by = 'percent_missing', ascending = False)[:40].plot(kind = 'bar', x = 'column_name', y = 'percent_missing', ax = axes[3][0],color = 'grey', title = 'Customers percentage of missing values less 30%' ) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown The vast majority of the columns with missing values have a percent of missing under 30% Based on this information I will remove columns with more than 30% missing values ###Code #extracting 
column names with more than 30% values missing so we can drop them from azdias df azdias_col_delete = columns_to_delete(azdias_missing_over_30) #extracting column names with more than 30% values missing so we can drop them from customers df customers_col_delete = columns_to_delete(customers_missing_over_30) #dropping the columns identified in the previous lists azdias = azdias.drop(azdias_col_delete, axis = 1) customers = customers.drop(customers_col_delete, axis = 1) #since I just dropped several columns I will do another balance check balance_checker(azdias, customers) ###Output Feature balance between dfs?: False Your first argument df differs from the second on the following columns: {'REGIOTYP', 'KKK'} Your second argument df differs from the first on the following columns: set() ###Markdown Now that we dropped columns missing more than 30% of their data let's check if we should also drop rows based on a particular threshold ###Code #plotting distribution of null values row_hist(azdias, customers, 30) ###Output _____no_output_____ ###Markdown Based on this visualization we deduct 2 things - most of the rows have less than 100 missing values - both customer and azdias have probably overlapping rows in which they are missing the same info deleting rows based on the information acquired in the previous histogram azdias = row_dropper(azdias, 100)customers = row_dropper(customers, 100) plotting null values distribution after cleanuprow_hist(azdias, customers, 30) balance_checker(azdias, customers) It seems like dropping the rows makes the clustering perform worse so for now I will skip it Feature Encoding Like I previously checked using the categorical_checker there are many features in need of re-encoding for the unsupervised learning portion - numerical features will be kept as is- ordinal features will be kept as is- categorical features and mixed type features will have to be re-encoded (label or dummies) ###Code #checking for mixed type features attributes[attributes.type == 'mixed'] #retrieve a list of categorical features for future encoding cats = attributes[attributes.type == 'categorical'] list(cats['attribute']) ###Output _____no_output_____ ###Markdown At this point I already dealt with the CAMEO_INTL_2015 column by converting XX to nanPRAEGENDE_JUGENDJAHRE has 3 dimentions: generation decade, if people are mainstream or avant-garde and if they are from east or west, I will create new features out of this particular columnLP_LEBENSPHASE_GROB seems to encode the same information as the CAMEO column and it is divided between gross(grob) and fine (fein) ###Code balance_checker(azdias, customers) ###Output Feature balance between dfs?: False Your first argument df differs from the second on the following columns: {'REGIOTYP', 'KKK'} Your second argument df differs from the first on the following columns: set() ###Markdown Feature engineering Based on the previous exploration there are a few features that are good candidates for novel feature creation ###Code azdias_eng = azdias.copy() customers_eng = customers.copy() azdias_eng = feat_eng(azdias_eng) customers_eng = feat_eng(customers_eng) ###Output Creating PRAEGENDE_JUGENDJAHRE_DECADE feature Creating PRAEGENDE_JUGENDJAHRE_MOVEMENT feature Creating WOHNLAGE_QUALITY feature Creating WOHNLAGE_AREA feature Creating Wealth and Family feature Creating LP_LEBENSPHASE_FEIN_life_stage and LP_LEBENSPHASE_FEIN_fine_scale feature Creating PRAEGENDE_JUGENDJAHRE_DECADE feature Creating PRAEGENDE_JUGENDJAHRE_MOVEMENT feature Creating 
WOHNLAGE_QUALITY feature Creating WOHNLAGE_AREA feature Creating Wealth and Family feature Creating LP_LEBENSPHASE_FEIN_life_stage and LP_LEBENSPHASE_FEIN_fine_scale feature ###Markdown balance_checker(azdias, customers) Now that I am done with creating new features and dealing with the most obvious columns I might need to encode the remaining categorical features Considering this post: https://stats.stackexchange.com/questions/224051/one-hot-vs-dummy-encoding-in-scikit-learn there are advantages and drawbacks with chosing one-hot-encoding vs dummy encoding. There are also concerns regarding using dummies all together https://towardsdatascience.com/one-hot-encoding-is-making-your-tree-based-ensembles-worse-heres-why-d64b282b5769 so I will keep this in mind while moving forward Feature scaling Before moving on to dimentionality reduction I need to apply feature scaling, this way principal component vectors won't be affected by the variation that naturally occurs in the data Before applying the scaler there should be no missing values in the data azdias_eng.to_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\azdias_eng.csv")customers_eng.to_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\customers_eng.csv") ###Code #dataframes using StandardScaler azdias_SS = feature_scaling(azdias_eng, 'StandardScaler') customers_SS = feature_scaling(customers_eng, 'StandardScaler') #dataframes using MinMaxScaler azdias_MMS = feature_scaling(azdias_eng, 'MinMaxScaler') customers_MMS = feature_scaling(customers_eng, 'MinMaxScaler') ###Output _____no_output_____ ###Markdown Dimensionality Reduction Finally I will use PCA (linear technique) to select only the components that seem to be more impactfull ###Code n_components_azdias = len(azdias_SS.columns.values) n_components_customers = len(customers_SS.columns.values) azdias_SS_pca = pca_model(azdias_SS, n_components_azdias) customers_SS_pca = pca_model(customers_SS, n_components_customers) azdias_MMS_pca = pca_model(azdias_MMS, n_components_azdias) customers_MMS_pca = pca_model(customers_MMS, n_components_customers) ###Output _____no_output_____ ###Markdown Change to code if intending to save file progress for other notebooksazdias_SS.to_pickle(r"C:\Users\sousa\Desktop\github\Arvato\data\azdias_SS.pickle")azdias_MMS.to_pickle(r"C:\Users\sousa\Desktop\github\Arvato\data\azdias_MMS.pickle") ###Code scree_plots(azdias_SS_pca, azdias_MMS_pca, ' azdias') scree_plots(customers_SS_pca, customers_MMS_pca, ' customers') ###Output _____no_output_____ ###Markdown Each principal component is a directional vector pointing to the highest variance. The greater the distance from 0 the more the vector points to a feature.Lets check some interesting features in up to the third dimension. 
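As a rough illustration of how these weights can be read off a fitted PCA (assuming `pca_model` above returns a fitted sklearn `PCA` object, so `azdias_SS_pca.components_` is available; the helper name `component_weights` exists only for this sketch): ###Code
import pandas as pd

def component_weights(fitted_pca, feature_names, dim, n=3):
    # each row of components_ holds the weights of one principal component
    weights = pd.Series(fitted_pca.components_[dim], index=feature_names).sort_values()
    # the most negative and most positive weights dominate that component's direction
    return weights.head(n), weights.tail(n)

# e.g. lowest, highest = component_weights(azdias_SS_pca, azdias_SS.columns, dim=0)
###Output
_____no_output_____
###Markdown
The helper used below prints the lowest and highest weighted attributes per dimension.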
###Code display_interesting_features(azdias_SS, azdias_SS_pca, 0) display_interesting_features(azdias_SS, azdias_SS_pca, 1) display_interesting_features(azdias_SS, azdias_SS_pca, 3) ###Output Lowest: PRAEGENDE_JUGENDJAHRE_DECADE -0.196 FINANZ_UNAUFFAELLIGER -0.191 FINANZ_SPARER -0.191 Highest: LP_LEBENSPHASE_FEIN_life_stage 0.147 ALTERSKATEGORIE_GROB 0.160 FINANZ_VORSORGER 0.168 ###Markdown The highest the weight of an attribute the more relevant it is considered to be, lets take a look on the most important features for a few dimensions considering that positive weights might relate to a positive relationship and negative weights a negative one.:- dimension 1(0) using standard scaler: These are some of features related to positive weights: - MOBI_RASTER: refers to the individual's mobility - KBA13_ANTG1: lower share of car owners - PLZ8_ANTG1 : lower number of 1-2 family houses And these are some of the feaures related to negative weights: - KBA13_ANTG4: refers to posession of higher number of cars - PLZ-ANTG3: number of 6-10 family houses in the PLZ8 - CAMEO_DEU_2015: detailed classification os cultural and living status So overall the first dimension refers to the social status and living conditions of the individuals present in the dataset.Interesting to see that even though different features were selected for dimensionality reduction but they overall have the same meaning. Based on these plots:- using standard scaler with 150 principal components 90% of the original variance can be represented- using minmax scaler with 150 components we represent 90% of the original variance Moving on I will pick the standard scaler PCA and I will re-fit with with a number of components that explains over 80% of the explained variance Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. After a lot of data Pre-Processing we are finally getting to the analysis, I will start by attempting KMeans to find relevant clusters Now that I have reduced the number of components to use, it is important to select the number of clusters to aim at for kmeans ###Code pca = PCA(150) azdias_SS_pca = pca.fit_transform(azdias_SS) customers_SS_PCA = pca.fit_transform(customers_SS) ###Output _____no_output_____ ###Markdown The elbow method (https://bl.ocks.org/rpgove/0060ff3b656618e9136b) is a way to validate the optimal number of clusters to use for a particular dataset.It can take some time training the dataset, optimising for the optimal n of clusters means that less resources are used. 
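The elbow computation itself is not shown in this notebook; a minimal sketch of how such a curve could be produced (assuming `azdias_SS_pca` is the PCA-transformed array from the cell above; the range of k and the random seed are placeholders): ###Code
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

inertias = []
k_values = list(range(2, 16))
for k in k_values:
    km = KMeans(n_clusters=k, random_state=28).fit(azdias_SS_pca)
    inertias.append(km.inertia_)  # sum of squared distances to the closest centroid

plt.plot(k_values, inertias, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum of squared distances')
plt.title('Elbow method for optimal k')
plt.show()
###Output
_____no_output_____
###Markdown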
Based on the elbow plot 9 clusters should be enough to proceed with the kmeans training ###Code # refitting using just 9 clusters kmeans = KMeans(9) kmodel = kmeans.fit(azdias_SS_pca) #and now we can compare the customer data to the general demographics azdias_clusters = kmodel.predict(pca.transform(azdias_SS)) customers_clusters = kmodel.predict(pca.transform(customers_SS)) #getting clusters for the LNRs for the customers cluster_map = pd.DataFrame() cluster_map['LNR'] = azdias_eng.index.values cluster_map['cluster'] = kmodel.labels_ ###Output _____no_output_____ ###Markdown Experimenting with visualization of the clusters preparing cluster visualizationfrom collections import Counterazdias_labels = kmeans.labels_customers_labels = kmeans.labels_model_feat = list(azdias_eng.columns)cust_feat = list(customers_eng.columns)model_feat_df = pd.DataFrame()model_feat_df['model_feat'] = model_featFind model features not in customer featuresmodel_feat_notin_cust = [feat for feat in model_feat if feat not in cust_feat]len(model_feat_notin_cust)customers_pca = pca.transform(customers_eng)customers_labels = kmeans.predict(customers_pca)counts_customer = Counter(customers_labels)n_customers = customers_pca.shape[0]customer_freqs = {label: 100*(freq / n_customers) for label, freq in counts_customer.items()}counts_population = Counter(azdias_labels)n_population = azdias_SS_pca.shape[0]population_freqs = {label: 100*(freq / n_population) for label, freq in counts_population.items()}customer_clusters = pd.DataFrame.from_dict(customer_freqs, orient='index', columns=['% of data'])customer_clusters['Cluster'] = customer_clusters.indexcustomer_clusters['DataSet'] = 'Customers Data'population_clusters = pd.DataFrame.from_dict(population_freqs, orient='index', columns=['% of data'])population_clusters['Cluster'] = population_clusters.indexpopulation_clusters['DataSet'] = 'General Population'all_clusters = pd.concat([customer_clusters, population_clusters])all_clusters ###Code sns.catplot(x='Cluster', y='% of data', hue='DataSet', data=all_clusters, kind='bar') plt.show() #transform the customers using pca customers_pca = pca.transform(customers_SS) #predict clustering using the kmeans predict_customers = kmodel.predict(customers_pca) #cluster and center prediction and info clust_preds = kmodel.predict(azdias_SS_pca) centers = kmodel.cluster_centers_ # Compare the proportion of data in each cluster for the customer data to the # proportion of data in each cluster for the general population. general_pop = [] customers_pop = [] x = [i+1 for i in range(9)] for i in range(9): general_pop.append((clust_preds == i).sum()/len(clust_preds)) customers_pop.append((predict_customers == i).sum()/len(predict_customers)) df_general = pd.DataFrame({'cluster' : x, 'general_proportion' : general_pop, 'customers_proportion':customers_pop}) df_general.plot(x='cluster', y = ['general_proportion', 'customers_proportion'], kind='bar', figsize=(9,6)) plt.ylabel('proportion of persons in each cluster') plt.show() ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. 
Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code mailout_train = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\mailout_train.csv") mailout_test = pd.read_csv(r"C:\Users\sousa\Desktop\github\Arvato\data\mailout_test.csv") #running all the cleaning and feature transformation functions #fixing the mixed type columns mailout_train = special_feature_handler(mailout_train) mailout_test = special_feature_handler(mailout_test) #dealing with missing and unknowns unknowns_to_NANs(mailout_train, dias_xls) unknowns_to_NANs(mailout_test, dias_xls) #getting percentages of missing percent_missing_train = percentage_of_missing(mailout_train) percent_missing_test = percentage_of_missing(mailout_test) #getting missing over 30% train_missing_over_30 = split_on_percentage(percent_missing_train, 30, '>') test_missing_over_30 = split_on_percentage(percent_missing_test, 30, '>') #getting columns to delete train_col_delete = columns_to_delete(train_missing_over_30) test_col_delete = columns_to_delete(test_missing_over_30) #dropping cols mailout_train = mailout_train.drop(train_col_delete, axis = 1) mailout_test = mailout_test.drop(test_col_delete, axis = 1) #feature engineering mailout_train = feat_eng(mailout_train) mailout_test = feat_eng(mailout_test) ###Output Creating PRAEGENDE_JUGENDJAHRE_DECADE feature Creating PRAEGENDE_JUGENDJAHRE_MOVEMENT feature Creating WOHNLAGE_QUALITY feature Creating WOHNLAGE_AREA feature Creating Wealth and Family feature Creating LP_LEBENSPHASE_FEIN_life_stage and LP_LEBENSPHASE_FEIN_fine_scale feature Creating PRAEGENDE_JUGENDJAHRE_DECADE feature Creating PRAEGENDE_JUGENDJAHRE_MOVEMENT feature Creating WOHNLAGE_QUALITY feature Creating WOHNLAGE_AREA feature Creating Wealth and Family feature Creating LP_LEBENSPHASE_FEIN_life_stage and LP_LEBENSPHASE_FEIN_fine_scale feature ###Markdown mailout_train.to_csv('mailout_train_clean.csv')mailout_test.to_csv('mailout_test_clean.csv') since I just dropped several columns I will do another balance check ###Code balance_checker(azdias_eng, mailout_train) balance_checker(mailout_train, mailout_test) ###Output Feature balance between dfs?: False Your first argument df differs from the second on the following columns: {'RESPONSE'} Your second argument df differs from the first on the following columns: set() ###Markdown Before moving on to the model I want to experiment a bit with the identified clusters and how the responses align with the clusters ###Code #merging mailout and cluster map on the LNR column clusters_mailout = pd.merge(mailout_train, cluster_map, on = 'LNR') response_and_cluster = clusters_mailout[['LNR', 'RESPONSE', 'cluster']] ax = sns.countplot(x="cluster", hue="RESPONSE", data=response_and_cluster) plot1 = response_and_cluster.groupby(['cluster', 'RESPONSE'])['RESPONSE'].count().unstack('RESPONSE') plot1[[0, 1]].plot(kind = 'bar', stacked = True) df_pos_response = response_and_cluster[response_and_cluster['RESPONSE'] == 1] df_neg_response = 
response_and_cluster[response_and_cluster['RESPONSE'] == 0] figure, axes = plt.subplots(2, 1, figsize = (15,15), squeeze = False) pos = df_pos_response['cluster'].value_counts().plot(kind='bar',figsize=(15,10), color = 'C1', ax = axes[0][0], title = 'Clusters with positive responses') neg = df_neg_response['cluster'].value_counts().plot(kind='bar',figsize=(15,10), ax = axes[1][0], title = 'Clusters with negative responses') plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Change to code if intending to save file progress for other notebooksmailout_train.to_pickle(r"C:\Users\sousa\Desktop\github\Arvato\data\mailout_train.pickle")mailout_test.to_pickle(r"C:\Users\sousa\Desktop\github\Arvato\data\mailout_test.pickle") Getting to the models for response prediction ###Code #getting the target target = mailout_train['RESPONSE'] mailout_train_clean = mailout_train.drop(['RESPONSE'], inplace=False, axis=1) #dropping LNR mailout_test_clean = mailout_test.copy() mailout_train_clean.drop(['LNR'], inplace = True, axis = 1) mailout_test_clean.drop(['LNR'], inplace = True, axis = 1) balance_checker(mailout_train_clean, mailout_test_clean) #checking the label distribution sns.countplot(target).set_title("Label distribution") ###Output _____no_output_____ ###Markdown Based on this plot these datasets are quite imbalanced, there are quite a few more responses corresponding to 0 than to 1.Considering this accuracy will not be an appropriate metric, ROC-AUC is a better option (1 being a perfect score, and 0.5 just random chance)Lets try model selection ###Code SEED = 28 # 5 stratified folds skf = StratifiedKFold(n_splits=5, random_state=SEED, shuffle=True) skf.get_n_splits(mailout_train_clean, target) ###Output _____no_output_____ ###Markdown The best performing model seem to be XGB so from here on I will continue the analysis with this model.There also seems to be no difference beetween standard scaler and Min Max Scaler. since the second is a bit faster I will stick with it. ###Code scaler = MinMaxScaler() scaler.fit(mailout_train_clean.astype('float')) mailout_train_scaled = scaler.transform(mailout_train_clean) mailout_test_scaled = scaler.transform(mailout_test_clean) # map back to dfs mailout_train_scaled = pd.DataFrame(data=mailout_train_scaled, columns=mailout_train_clean.columns) mailout_test_scaled = pd.DataFrame(data=mailout_test_scaled, columns=mailout_test_clean.columns) balance_checker(mailout_train, mailout_test) ###Output _____no_output_____ ###Markdown I experimented with multiple models to determine which one was the best performing. The best performer was xgboost so I decided to use the Bayesian Optimization algorithm to fine tune the hyperparameters that produce the best results. 
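The search itself is not included here; a minimal sketch of a Bayesian hyperparameter search using scikit-optimize's `BayesSearchCV` (an assumed library choice, with placeholder search ranges and budget, reusing `skf`, `SEED`, `mailout_train_scaled` and `target` from above): ###Code
from skopt import BayesSearchCV
from skopt.space import Real, Integer
import xgboost as xgb

search = BayesSearchCV(
    estimator=xgb.XGBClassifier(objective='binary:logistic'),
    search_spaces={
        'learning_rate': Real(1e-3, 0.3, prior='log-uniform'),
        'max_depth': Integer(1, 8),
        'n_estimators': Integer(100, 500),
        'subsample': Real(0.5, 1.0),
        'colsample_bytree': Real(0.5, 1.0),
    },
    scoring='roc_auc',   # matches the evaluation metric used throughout
    cv=skf,              # stratified folds defined above
    n_iter=30,           # placeholder iteration budget
    random_state=SEED,
)
# search.fit(mailout_train_scaled, target); search.best_params_
###Output
_____no_output_____
###Markdown
The tuned parameter sets used for the different Kaggle submissions follow below.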
###Code #submission 1 parameters bayes_xgb = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bytree=0.6872852588903648, eval_metric='auc', gamma=1.0, learning_rate=0.014017007043823785, max_delta_step=0, max_depth=7, min_child_weight=1, missing=None, n_estimators=236, n_jobs=-1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=1e-09, reg_lambda=1, scale_pos_weight=1, seed=None, silent=1, subsample=0.5) ###Output _____no_output_____ ###Markdown submission 2 parametersbayes_xgb = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bytree=1, eval_metric='auc', gamma=0, learning_rate=0.04788137748021131, max_delta_step=0, max_depth=1, min_child_weight=10, missing=None, n_estimators=429, n_jobs=-1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=0.17651703245342792, reg_lambda=1, scale_pos_weight=1, seed=None, silent=1, subsample=0.5) submission 3 parametersbayes_xgb = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bytree=0.9889952021544406, eval_metric='auc', gamma=0.7810813412544743, learning_rate=0.044734814719961276, max_delta_step=0, max_depth=1, min_child_weight=10, missing=None, n_estimators=454, n_jobs=-1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=1.0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=1, subsample=0.5) submission 4 parametersbayes_xgb = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bytree=0.8, eval_metric='auc', gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=5, min_child_weight=1, missing=None, n_estimators=454, n_jobs=2, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=1.0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=1, subsample=0.8) submission 5 parametersbayes_xgb = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bytree=0.6872852588903648, eval_metric='auc', gamma=1, learning_rate=0.01, max_delta_step=0, max_depth=3, min_child_weight=4, missing=None, n_estimators=454, n_jobs=2, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=1e-09, reg_lambda=1, scale_pos_weight=1, seed=None, silent=1, subsample=0.8) {'colsample_bylevel': 0.6207294509491699, 'colsample_bytree': 0.9623594386716922, 'gamma': 1.0, 'learning_rate': 0.001, 'max_delta_step': 1, 'max_depth': 6, 'min_child_weight': 1, 'n_estimators': 500, 'reg_alpha': 1e-09, 'reg_lambda': 0.0002004101751465717, 'scale_pos_weight': 56, 'subsample': 0.6092942995200367} submission 6 parametersbayes_xgb = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=0.6207294509491699, colsample_bytree=0.9623594386716922, eval_metric='auc', gamma=1, learning_rate=0.001, max_delta_step=1, max_depth=6, min_child_weight=1, missing=None, n_estimators=500, n_jobs=2, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=1e-09, reg_lambda=0.0002004101751465717, scale_pos_weight=56, seed=None, silent=1, subsample=0.6092942995200367) bayes_xgb = xgb.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.5578105865941786, eval_metric='auc', gamma=0.07148993933953135, learning_rate=0.002858444321957567, max_delta_step=0, max_depth=4, min_child_weight=1, missing=None, n_estimators=500, n_jobs=-1, nthread=None, objective='binary:logistic', random_state=0, reg_alpha=0.004635659074237927, reg_lambda=1, scale_pos_weight=41, seed=None, silent=1, subsample=0.522846932344412, 
verbosity=1) ###Code bayes_xgb.fit(mailout_train_scaled, target) plot_feature_importances(model=bayes_xgb, model_type="XGBClassifier", features=mailout_train_scaled.columns) ###Output _____no_output_____ ###Markdown submission 1 lgbmbayes_lgbm = lgb.LGBMClassifier(application='binary', boosting_type='gbdt', class_weight=None, colsample_bytree=1.0, importance_type='split', learning_rate=0.09531171453332088, max_bin=100, max_depth=2, metric='auc', min_child_samples=14, min_child_weight=0.001, min_data_in_leaf=265, min_split_gain=0.0, n_estimators=24, n_jobs=-1, num_leaves=450, objective=None, random_state=None, reg_alpha=1e-09, reg_lambda=1.0, scale_pos_weight=69.87642288579819, silent=True, subsample=1.0, subsample_for_bin=200000, subsample_freq=0, verbose=0) ###Code #submission 2 lgbm bayes_lgbm = lgb.LGBMClassifier(application='binary', boosting_type='gbdt', class_weight=None, colsample_bytree=1.0, importance_type='split', learning_rate=0.0882410095694084, max_bin=826, max_depth=2, metric='auc', min_child_samples=0, min_child_weight=0.001, min_data_in_leaf=32, min_split_gain=0.0, n_estimators=24, n_jobs=-1, num_leaves=94, objective=None, random_state=None, reg_alpha=1.0, reg_lambda=1e-09, scale_pos_weight=90.0, silent=True, subsample=1.0, subsample_for_bin=200000, subsample_freq=0, verbose=0) bayes_lgbm.fit(mailout_train_scaled, target) plot_feature_importances(model=bayes_lgbm, model_type="LGBMClassifier", features=mailout_train_scaled.columns) plot_comparison_feature(column = 'D19_SOZIALES', df=mailout_train) ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
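Since the test labels are withheld, one way to sanity-check the submission scores locally is to estimate AUC from out-of-fold probabilities on the train partition; a quick sketch, assuming the objects defined in Part 2: ###Code
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# AUC is rank-based, so it is scored on predicted probabilities, not hard labels
oof_prob = cross_val_predict(bayes_xgb, mailout_train_scaled, target,
                             cv=skf, method='predict_proba')[:, 1]
print('estimated ROC-AUC:', roc_auc_score(target, oof_prob))
###Output
_____no_output_____
###Markdown
The submission file itself is then created from the predicted probabilities on the test partition.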
###Code #fit and predict xgb bayes_xgb.fit(mailout_train_scaled, target) predictions = bayes_xgb.predict_proba(mailout_test_scaled)[:,1] #fit and predict lgbm bayes_lgbm.fit(mailout_train_scaled, target) predictions = bayes_lgbm.predict_proba(mailout_test_scaled)[:,1] # create submission file lnr = pd.DataFrame(mailout_test['LNR'].astype('int32')) predictions = pd.DataFrame(predictions) predictions = predictions.rename(columns={0: "RESPONSE"}) dfs = [lnr, predictions] submission = pd.concat(dfs, sort=False, axis=1) submission.set_index('LNR', inplace = True) submission.head() submission.shape submission.to_csv('kaggle_sub10.csv') ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. Table of ContentsI. [Part 0: Get to Know the Data](part0)$\;\;\;\;\;\;$[0.1 Explore azdias data](part0.1)$\;\;\;\;\;\;$[0.2 Explore customers data](part0.2)II. [Part 1: Customer Segmentation Report](part1)$\;\;\;\;\;\;$[1.1 Load and clean data](part1.1)$\;\;\;\;\;\;$[1.2 Perform PCA to reduce features](part1.2)$\;\;\;\;\;\;$[1.3 K-Means clustering](part1.3)III. [Part 2: Supervised Learning Model](part2)$\;\;\;\;\;\;$[2.1 Load, clean and perform PCA on training data](part2.1)$\;\;\;\;\;\;$[2.2 Downsample training data on major class](part2.2)$\;\;\;\;\;\;$[2.3 Classify training data](part2.3)$\;\;\;\;\;\;$[2.4 Improve classifier on training data](part2.4)$\;\;\;\;\;\;\;\;\;$[2.4.1 Build pipeline for grid search](part2.4.1)$\;\;\;\;\;\;\;\;\;$[2.4.2 Redo model fitting on training data without downsampling](part2.4.2)$\;\;\;\;\;\;\;\;\;$[2.4.3 Redo model fitting on training data without PCA](part2.4.3)IV. 
[Part 3: Kaggle Competition](part3)$\;\;\;\;\;\;$[3.1 Prepare test data](part3.1)$\;\;\;\;\;\;$[3.2 Predict on test data ](part3.2) ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.decomposition import PCA as PCA from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans from sklearn.svm import LinearSVC from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV from sklearn.model_selection import cross_val_score, KFold from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report, confusion_matrix from sklearn.metrics import roc_auc_score import itertools ! conda install -c conda-forge xgboost -y import xgboost as xgb # magic word for producing visualizations in notebook %matplotlib inline ###Output Collecting package metadata: done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.6.14 latest version: 4.8.3 Please update conda by running $ conda update -n base conda ## Package Plan ## environment location: /opt/conda added / updated specs: - xgboost The following packages will be downloaded: package | build ---------------------------|----------------- _py-xgboost-mutex-2.0 | cpu_0 8 KB conda-forge ca-certificates-2020.6.20 | hecda079_0 145 KB conda-forge certifi-2020.6.20 | py36h9f0ad1d_0 151 KB conda-forge libxgboost-1.0.2 | he1b5a44_1 2.8 MB conda-forge py-xgboost-1.0.2 | py36h9f0ad1d_1 2.2 MB conda-forge python_abi-3.6 | 1_cp36m 4 KB conda-forge xgboost-1.0.2 | py36h831f99a_1 11 KB conda-forge ------------------------------------------------------------ Total: 5.4 MB The following NEW packages will be INSTALLED: _py-xgboost-mutex conda-forge/linux-64::_py-xgboost-mutex-2.0-cpu_0 libxgboost conda-forge/linux-64::libxgboost-1.0.2-he1b5a44_1 py-xgboost conda-forge/linux-64::py-xgboost-1.0.2-py36h9f0ad1d_1 python_abi conda-forge/linux-64::python_abi-3.6-1_cp36m xgboost conda-forge/linux-64::xgboost-1.0.2-py36h831f99a_1 The following packages will be UPDATED: ca-certificates 2019.11.28-hecc5488_0 --> 2020.6.20-hecda079_0 certifi 2019.11.28-py36_0 --> 2020.6.20-py36h9f0ad1d_0 The following packages will be DOWNGRADED: scipy 1.2.1-py36h09a28d5_1 --> 0.19.1-py36_blas_openblas_202 Downloading and Extracting Packages python_abi-3.6 | 4 KB | ##################################### | 100% libxgboost-1.0.2 | 2.8 MB | ##################################### | 100% ca-certificates-2020 | 145 KB | ##################################### | 100% certifi-2020.6.20 | 151 KB | ##################################### | 100% xgboost-1.0.2 | 11 KB | ##################################### | 100% py-xgboost-1.0.2 | 2.2 MB | ##################################### | 100% _py-xgboost-mutex-2. 
| 8 KB | ##################################### | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done ###Markdown Part 0: Get to Know the Data There are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. 
###Code # load in the data types_dict = {'CAMEO_DEUG_2015': object, 'CAMEO_INTL_2015': object} # Define data type so no warning while reading csv azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';', dtype=types_dict) customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';', dtype=types_dict) print('azdias data shape: ', azdias.shape) print('customers data shape: ', customers.shape) # Convert column 18, 19 to numeric type azdias.iloc[:,18] = pd.to_numeric(azdias.iloc[:,18], errors='coerce') azdias.iloc[:,19] = pd.to_numeric(azdias.iloc[:,19], errors='coerce') customers.iloc[:,18] = pd.to_numeric(customers.iloc[:,18], errors='coerce') customers.iloc[:,19] = pd.to_numeric(customers.iloc[:,19], errors='coerce') print(azdias.dtypes[18:20]) print(customers.dtypes[18:20]) ###Output CAMEO_DEUG_2015 float64 CAMEO_INTL_2015 float64 dtype: object CAMEO_DEUG_2015 float64 CAMEO_INTL_2015 float64 dtype: object ###Markdown 0.1 Explore azdias data **Check NaN values in the data** ###Code # Check NaN values perc_nan = azdias.isnull().mean() print('Column No. with >90% NaN: ', np.where(perc_nan>0.9)) ###Output Column No. with >90% NaN: (array([4, 5, 6, 7]),) ###Markdown Four columns in azdias have more than 90% NaNs. These columns can be dropped when cleaning the data. ###Code azdias.iloc[:,[4, 5, 6, 7]].head() # Plot percentage of NaN in each column plt.bar(np.arange(len(perc_nan)), perc_nan); plt.title('Percentage of NaN in each azdias data column'); plt.xlabel('Column index'); ###Output _____no_output_____ ###Markdown Most of columns have NaN values less than 20%. NaN can be filled by the "unknown" label accroding to the freature descriptions in the data attibutes file. **Check column data type** ###Code # Count columns in numeric, boolean and category type num_azd = azdias.select_dtypes(include=['int','float64']).columns bool_azd = azdias.select_dtypes(include=['bool']).columns cat_azd = azdias.select_dtypes(include=['object']).columns len(num_azd), len(bool_azd), len(cat_azd) ###Output _____no_output_____ ###Markdown Most of the columns are numeric. Let's check four category columns as below. "EINGEFUEGT_AM" is actually a datetime column which has no description in the data attributes file. This column may be dropped. ###Code # Check category columns azdias.loc[:,cat_azd].head() ###Output _____no_output_____ ###Markdown Let's check the values in column "D19_LETZTER_KAUF_BRANCHE" below. These category values can be coded to numbers in data cleaning. ###Code # Check column "D19_LETZTER_KAUF_BRANCHE" values count_vals = azdias.D19_LETZTER_KAUF_BRANCHE.value_counts() (count_vals[:10]/customers.shape[0]).plot(kind='bar'); plt.title('D19_LETZTER_KAUF_BRANCHE'); plt.tight_layout(); ###Output _____no_output_____ ###Markdown Let's also check the data range of numeric columns. Some columns have -1. ###Code print('Numeric columns min value: {}'.format(azdias[num_azd].min().min())) print('Numeric columns max value: {}'.format(azdias[num_azd].max().max())) ###Output Numeric columns min value: -1.0 Numeric columns max value: 1082873.0 ###Markdown 0.2 Explore customers data Now, we repeat previous steps to check customers data. **Check NaN values in the data** ###Code # Check NaN values perc_na = customers.isnull().mean() print('Column No. with >90% NaN: ', np.where(perc_na>0.9)) ###Output Column No. with >90% NaN: (array([4, 5, 6, 7]),) ###Markdown customers data has the same four columns with more than 90% NaNs. 
These columns can be dropped when cleaning the data. ###Code plt.bar(np.arange(len(perc_na)), perc_na); plt.title('Percentage of NaN in each customers data column'); plt.xlabel('Column index'); ###Output _____no_output_____ ###Markdown **Check column data type** Customers data has the same columns in the same order as azdias data plus three more columns: 'CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'. Let's check the value distribution of these columns in the figure below. These category values can be coded to numbers in data cleaning. ###Code col_names = ['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'] i=1 for col_name in col_names: plt.subplot(1,3,i) count_vals = customers[col_name].value_counts() (count_vals[:10]/customers.shape[0]).plot(kind='bar'); plt.title(col_name); plt.tight_layout(); i +=1 ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation Report The main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. In the customer segmentation part, we'd like to differentiate customer group ("customers") from the general population ("azdias"). KMeans clustering is a good choise to group the data. Considering customer group is part of the general population, it may belongs to part of the groups in general population. We will do clustering on the general population first, then apply the same clustering scheme on the customers data to see if it is only grouped into part of the clusters. 1.1 Load and clean data After loading the data from csv file, column 8 and 9 need to be converted to numeric types first.According to data exploraton in the previous section, columns with more than 90% of NaNs that can be dropped. There is one datetime column "EINGEFUEGT_AM" which has no description in "DIAS Attributes - Values 2017.xlsx". This will also be dropped. "LNR" column is the ID for each data sample, which will also be dropped before clustering and machine learning. Furthermore, according to feature descriptions in "DIAS Attributes - Values 2017.xlsx", most of the features in numeric columns have "unkown" category which is coded as -1, 0, or 9. Hence NaN in each column can be filled as the numeric code of "unkown" category. If the "unknow" category has more than 1 numeric code, they will be merge to 1 number.Finally, category columns are encoded to numbers. All the steps will be performed in the "load_and_clean_data" function below. ###Code def load_and_clean_data(file_name): ''' INPUT file_name - csv file name OUTPUT df - pandas dataframe This function load data from csv file to df and cleans df by the following steps: 1. Convert 2 columns from string to number 2. Drop columns with 90% more NaNs, 1 ID column and 1 datetime column 3. Fill NaNs in numeric columns with label of unknown 4. 
Encode category columns to numeric columns ''' # Load data from csv file types_dict = {'CAMEO_DEUG_2015': object, 'CAMEO_INTL_2015': object} df = pd.read_csv(file_name, sep=';', dtype=types_dict) print('File loaded...') print('data shape before cleaning: {}'.format(df.shape)) # Convert two columns to numeric type df.CAMEO_DEUG_2015 = pd.to_numeric(df.CAMEO_DEUG_2015, errors='coerce') df.CAMEO_INTL_2015 = pd.to_numeric(df.CAMEO_INTL_2015, errors='coerce') # Drop columns with more than 90% of missing values perc_nan = df.isnull().mean() df = df.drop(df.columns[np.where(perc_nan>0.9)], axis=1) # Drop one ID column and one datetime columns df = df.drop(['LNR','EINGEFUEGT_AM'], axis=1) # Map NaN to Unknown label 10 in the following columns col_unknown_10 = ['D19_BANKEN_DATUM', 'D19_BANKEN_OFFLINE_DATUM', 'D19_BANKEN_ONLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM', 'D19_TELKO_DATUM', 'D19_TELKO_ONLINE_DATUM', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM']; for col in col_unknown_10: df[col].fillna(10, inplace=True) # Map NaN to Unknown label 0 in the following columns col_name0 = ['ALTERSKATEGORIE_GROB','ANREDE_KZ','GEBAEUDETYP','HH_EINKOMMEN_SCORE','KBA05_BAUMAX', 'KBA05_GBZ','NATIONALITAET_KZ','PRAEGENDE_JUGENDJAHRE','REGIOTYP','TITEL_KZ', 'WOHNDAUER_2008','W_KEIT_KIND_HH','KKK'] for col in col_name0: df[df[col]==-1] = 0 # Merge unknown label -1 with 0 df[col].fillna(0, inplace=True) # Map NaN to Unknown label 9 in the following columns indx00 = np.where(df.columns.str.match('KBA05_CCM1'))[0][0] indx01 = np.where(df.columns.str.match('KBA05_FRAU'))[0][0]+1 indx10 = np.where(df.columns.str.match('KBA05_HERST1'))[0][0] indx11 = np.where(df.columns.str.match('KBA05_ZUL4'))[0][0]+1 indx20 = np.where(df.columns.str.match('SEMIO_DOM'))[0][0] indx21 = np.where(df.columns.str.match('SEMIO_VERT'))[0][0]+1 indx3 = np.where(df.columns.str.match('D19_KONSUMTYP'))[0][0] indx4 = np.where(df.columns.str.match('RELAT_AB'))[0][0] indx5 = np.where(df.columns.str.match('ZABEOTYP'))[0][0] col_indx = np.concatenate((np.arange(indx00,indx01),np.arange(indx10,indx11),np.arange(indx20,indx21), np.arange(indx3,indx3+1),np.arange(indx4,indx4+1),np.arange(indx5,indx5+1))) col_name9 = df.columns[col_indx] for col in col_name9: df[df[col]==-1] = 9 # Merge unknown label -1 with 9 df[col].fillna(9, inplace=True) # Replace NaN in the rest of numeric columns with min() num_vars = df.select_dtypes(include=['float', 'int']).columns for col in num_vars: df[col].fillna((df[col].min()), inplace=True) print('NaN in numeric columns filled...') # Dummy the categorical variables cat_vars = df.select_dtypes(include=['object']).copy().columns for var in cat_vars: # for each cat add dummy var, drop original column df = pd.concat([df.drop(var, axis=1), pd.get_dummies(df[var], prefix=var, prefix_sep='_', drop_first=True)], axis=1) print('File cleaned.') print('data shape after cleaning: {}'.format(df.shape)) return df ###Output _____no_output_____ ###Markdown **Load and clean "azdias" and "customers" data** ###Code azd = load_and_clean_data('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv') print('Cleaned azdias shape: {}'.format(azd.shape)) cust = load_and_clean_data('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv') print('Cleaned customers shape: {}'.format(cust.shape)) ###Output File loaded... data shape before cleaning: (191652, 369) NaN in numeric columns filled... File cleaned. 
data shape after cleaning: (191652, 441) Cleaned customers shape: (191652, 441) ###Markdown 1.2 Perform PCA to reduce features Note that the cleaned data has more than 400 features. Insead of doing KMeans clustering directly on the cleaned data, we prefer to do PCA(Principal Component Analysis) first to reduce features. **Define fuction to standardize data and perform PCA** Before doing PCA, we need to standardize the dataset's features onto unit scale (mean = 0 and variance = 1) so that PCA won't be affected by the original data scale. A threshold with default number of 0.9 is set to cut off the principal components so we can get the minimum number of features while preserving at least 90% of data variations. ###Code def choose_principal_components(df, percent=0.9): ''' INPUT df - 2D pandas dataframe percent - percentage of data variance to be perserved after pca OUTPUT df_principal - 2D dataframe of ordered principal components from df num_pca - minimum number of principal components that perserves more than "percent" of variance of df pca - PCA model This function perform PCA on df and return pca model, principal components in order and its number and plot accumulated percentage of variance explained by each of the principal components. ''' # Standardize data for PCA df_std = StandardScaler().fit_transform(df) print('Standardized data...') # Perform PCA pca = PCA(n_components=df.shape[1], random_state=42) pca.fit(df_std) print('Initial PCA is done...') # Accumulate percentage of variance explained by each of the principal components. weight_pca = pca.explained_variance_ratio_ perc_pca = np.cumsum(weight_pca) # Minimum number of principal components that perserves more than 90% of variance of df num_pca = np.where(perc_pca>=percent)[0][0] # PCA transform df_std to num_pca components pca = PCA(n_components=num_pca, random_state=42) principal_components = pca.fit_transform(df_std) df_principal= pd.DataFrame(principal_components) print('Final PCA keeps {} features.'.format(num_pca)) # Plot PCA results plt.figure(figsize=(6,4)) plt.plot(perc_pca, color='k', lw=2) plt.xlabel('Number of components') plt.ylabel('Total explained variance') plt.xlim(0, df.shape[1]) plt.yticks(np.arange(0, 1.1, 0.1)) plt.axvline(num_pca, c='b') plt.axhline(percent, c='r') plt.show(); return df_principal, num_pca, pca ###Output _____no_output_____ ###Markdown **Choose minimum number of features to preserve 90% of data variation**According to the plot below, we can use the first 196 principal components from PCA to represent the original data. The data size has been reduced more than 50%. ###Code # Perform PCA on azdias azdias_pca, num_pca, pca = choose_principal_components(azd, percent=0.9) ###Output Standardized data... Initial PCA is done... Final PCA keeps 196 features. ###Markdown **Check feature importance in each principal component** Let's define a function to record the feature importance in each principal component. 
###Code def create_importance_dataframe(pca, df_col): ''' INPUT pca - PCA model df_col - features(column names) of original dataframe OUTPUT importance_df - 2D dataframe of feature importance for each PCA component This function record feature importance for each PCA component into a 2D dataframe ''' # Change pcs components ndarray to a dataframe importance_df = pd.DataFrame(pca.components_) # Assign columns importance_df.columns = df_col # Change to absolute values importance_df =importance_df.apply(np.abs) # Transpose importance_df=importance_df.transpose() # Change column names again ## First get number of pcs num_pcs = importance_df.shape[1] ## Generate the new column names new_columns = [f'PC{i}' for i in range(1, num_pcs + 1)] ## Now rename importance_df.columns = new_columns # Return importance df return importance_df ###Output _____no_output_____ ###Markdown The table below shows the importance score of each feature in top 10 principal components. ###Code # Call function to create importance df importance_df =create_importance_dataframe(pca, azd.columns) # Show first few rows display(importance_df.iloc[0:4,0:10]) ###Output _____no_output_____ ###Markdown Now we can plot the top 10 features in the first principal component. ###Code # Sort depending on PC of interest ## PC1 top 10 important features pc1_top_10_features = importance_df['PC1'].sort_values(ascending = False)[:10] # print(), print(f'PC1 top 10 feautres are \n') # display(pc1_top_10_features ) # Plot top 10 important features in the first principal component ax = pc1_top_10_features.plot.bar(title='Top 10 important features in 1st principal component',rot=60); ax.set_ylabel('importance'); ###Output _____no_output_____ ###Markdown It is hard to select a few most important features out of 400 by analyzing their importance scores in 196 principal components. We will just use the transformed data in the principal components' space for KMeans clustering later. Perform the same PCA on customers data ###Code # find columns to drop in customers data which are not in azdias data col_drop = np.setdiff1d(cust.columns, azd.columns) # Standardize customers data cust_std = StandardScaler().fit_transform(cust.drop(col_drop,axis=1)) # Perform PCA on customers data as same as PCA on azdias data customers_pca = pca.transform(cust_std) customers_pca.shape ###Output _____no_output_____ ###Markdown 1.3 K-Means clustering **Choose a reasonable cluser number**Run k-means to cluster the neighborhood into 2~20 clusters. Calculate the sum of squared distance to find the optimal cluster number. ###Code def choose_kmean_number(df): ''' INPUT df - pandas dataframe OUTPUT None This function display elbow plot for k-mean clustering on df ''' Sum_of_squared_distances = [] for k in range(2,20,2): # run k-means clustering kmeans = KMeans(n_clusters=k, random_state=0).fit(df) # Record sum of squared distances of samples to the nearest cluster center Sum_of_squared_distances.append(kmeans.inertia_) K = range(2,20,2) plt.plot(K, Sum_of_squared_distances, 'bx-') plt.xlabel('k',fontsize=14) plt.ylabel('Sum_of_squared_distances',fontsize=14) plt.title('Elbow method for optimal k',fontsize=14) plt.show() choose_kmean_number(azdias_pca) ###Output _____no_output_____ ###Markdown According to the elbow plot, we can divide azdias data to 10 clusters. 
Run k-means clusering for 10 clusters ###Code def kmean_clustering(k, df_pca, df): ''' INPUT k - number of clustering df_pca - pandas dataframe with reduced features after PCA df - original pandas dataframe OUTPUT df - original pandas dataframe with cluster label added to the first column kmeans - KMeans clustering model This function groups df_pca into k clusters and add cluster labels to the original df ''' # run k-means clustering kmeans = KMeans(n_clusters=k, random_state=0).fit(df_pca) # add clustering labels to df df.insert(0, 'Cluster Label', kmeans.labels_) return df, kmeans ###Output _____no_output_____ ###Markdown Now, let's group azdias data into 10 clusters as below. ###Code azdias, kmeans = kmean_clustering(10, azdias_pca, azd) azdias.iloc[0:5,0:9] ###Output _____no_output_____ ###Markdown Do the same clustering on customers data ###Code # Do the same KMeans clustering on customers data cust_labels = kmeans.predict(customers_pca) # add clustering labels to customers cust.insert(0, 'Cluster Label', cust_labels) ###Output _____no_output_____ ###Markdown Check clustering population The plot below shows the sample counts in each cluster group for both general population and customers data. We can see that general population distributes quite evenly in eight clusters while the customer group is only populated in three clusters. ###Code # Get sample count in each cluster azd_cluster = azdias.groupby('Cluster Label').LNR.count() cust_cluster = cust.groupby('Cluster Label').LNR.count() # Plot sample count in each cluster df_cluster = pd.concat([azd_cluster, cust_cluster], axis=1, sort=False) df_cluster.columns = ['General Populations', 'Customers'] df_cluster.plot(kind="bar", title="Sample count",rot=0); plt.ylabel("Count"); ###Output _____no_output_____ ###Markdown From the unsupervised learning we know that the customers data has different distribution on its demographic features from the general population data. So we should be able to identified potential customers from the general population. Part 2: Supervised Learning Model Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 2.1 Load, clean and perform PCA on training data First, we load and clean the training data. ###Code mailout_train = load_and_clean_data('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv') X_train = mailout_train.drop('RESPONSE',axis=1) Y_train = mailout_train['RESPONSE'] ###Output _____no_output_____ ###Markdown Let's check the event rate for class 0 ("RESPONSE"= 0, not customer) and class 1 ("RESPONSE"= 1, customer). Class 1 is only 1% of the whole samples. 
###Code resp_rate = round(mailout_train.groupby('RESPONSE').count().iloc[:,0]/mailout_train.shape[0],2) ax = resp_rate.plot.bar(rot=0); for p in ax.patches: ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.005)); ###Output _____no_output_____ ###Markdown The training data may have different characteristics from the general population data, so we do a seperate PCA on the training data. ###Code X_train_pca, num_pca, pca = choose_principal_components(X_train, 0.9) ###Output Standardized data... Initial PCA is done... Final PCA keeps 184 features. ###Markdown The plot shows that we can use 184 principal components to represent the training data while preserving 90% of it variations. By doing PCA, we achieve size reduction by more than 50%. We can also visualize how sparse the actual customers are among all training samples by plotting 500 samples in the space of two principle components below. ###Code dfpca = X_train_pca.loc[:,[50,60]] dfpca[2] = Y_train dfpca.columns = ['p1', 'p2', 'resp'] dfpca.head() ax1 = dfpca.iloc[0:500,:].plot.scatter(x= 'p1', y= 'p2', c = 'resp', colormap='coolwarm'); ###Output _____no_output_____ ###Markdown In the scatter plot above, there are only a few red dots (actual customers) among all samples. 2.2 Downsample training data on major class Now, here comes the triky part of classification problem. After PCA, the data has already been standardized. We can directly used the PCA transformed data to train the classifier.According to the event label distribution plot from "response" column in previous section, Response 1 (class 1) samples is only 1% of all samples. If we just use all the samples to train a standard classifier, it will very likely predict all sample as class 0 while still achieving 99% of accuracy.How to classify this highly imbalanced data? There are may ways to handle it in aspects of both data resampling and machine learning algorithm. Here we use cluster-based downsampling to downsample the major class (class 0) in each of its clusters to make the data more balanced while preserving the data characteristics in class 0 as much as possible. 
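For contrast, the algorithm-side option mentioned above can be as simple as re-weighting the minority class instead of resampling; a minimal sketch (not the approach taken in this notebook, and the weight is only indicative): ###Code
import xgboost as xgb

# weight positive (responder) errors by the class imbalance instead of resampling
ratio = (Y_train == 0).sum() / (Y_train == 1).sum()
weighted_clf = xgb.XGBClassifier(objective='binary:logistic',
                                 scale_pos_weight=ratio)
# weighted_clf.fit(X_train_pca, Y_train)
###Output
_____no_output_____
###Markdown
The cluster-based downsampling used here is implemented below.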
###Code def downsample_class0(X, Y, k_cluster=10, DnSamp_rate=20): ''' INPUT X - 2D dataframe for down-sampling Y - 1D array of class labels(0/1) for X k_cluster - k-mean cluster number DnSamp_rate - down-sampling rate OUTPUT X_DnSamp - 2D dataframe for down-sampled X Y_DnSample - 1D array of class labels(0/1) for X_DnSamp This function performs k-mean clustering on X, and down sample class 0 in each cluster ''' # Run k_mean cluster for 10 cluster on class 0 of traning data X_class0 = X[Y==0] X_class1= X[Y==1] print('Original Class1/Class0 = {}'.format(sum(Y==1)/sum(Y==0))) X_class0_cluster, kmeans = kmean_clustering(k_cluster, X_class0, X_class0) print('K-mean clustering is done for {} clusters.'.format(k_cluster)) # Down-sample in each cluster of class 0 cluster_size = X_class0_cluster.groupby('Cluster Label').count()[0] X_DnSamp = pd.DataFrame() for i in np.arange(k_cluster): df_cluster = X_class0_cluster[X_class0_cluster['Cluster Label']==i] X_DnSamp = X_DnSamp.append(df_cluster.sample(n=round(cluster_size[i]/DnSamp_rate).astype(int), random_state=i)) # Append class 1 data X_DnSamp.drop('Cluster Label',axis=1,inplace=True) num_class0 = X_DnSamp.shape[0] X_DnSamp = X_DnSamp.append(X_class1) # Set same column names X_DnSamp.columns = X.columns print('Down sampled X size: {}'.format(X_DnSamp.shape)) # Down sample Y_train Y_DnSample = np.zeros(X_DnSamp.shape[0], dtype=int) Y_DnSample[num_class0:] = 1 print('Down sampled Class1/Class0 = {}'.format(sum(Y_DnSample==1)/sum(Y_DnSample==0))) return X_DnSamp, Y_DnSample # Downsample training data on class 0 X_train_pca_DnSamp, Y_train_DnSample = downsample_class0(X_train_pca, Y_train, k_cluster=10, DnSamp_rate=50) ###Output Original Class1/Class0 = 0.012538298373792129 K-mean clustering is done for 10 clusters. Down sampled X size: (1381, 184) Down sampled Class1/Class0 = 0.6266195524146054 ###Markdown After downsampling, class 1 is about half size of class 0. 2.3 classify training data First, we try classifier LinearSVC and Logistic Regression and check their ROC score since the final prediction on test data will be evaluated by roc_auc score in Kaggle competition. ###Code # Define LinearSVC classifier clf = LinearSVC(random_state=0, tol=1e-5) # Fit model on downsampled training data clf.fit(X_train_pca_DnSamp, Y_train_DnSample) # Predict on training data yhat = clf.predict(X_train_pca) # Print ROC score roc=roc_auc_score(Y_train, yhat) print('ROC score: {}'.format(roc)) # Display model clf # Fit Logistic Regression classifier on downsampled training data LR = LogisticRegression(C=0.02, solver='sag').fit(X_train_pca_DnSamp, Y_train_DnSample) # Predict on training data yhat = LR.predict(X_train_pca) # Print ROC score roc=roc_auc_score(Y_train, yhat) print('ROC score: {}'.format(roc)) # Display model LR ###Output ROC score: 0.6305465082692591 ###Markdown LinearSVC and logistic regression classifier give similar scores. Let's also define a function to plot the confusion matrix. ###Code def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ INPUT cm - actual class in 1D array classes - predicted class in 1D array OUTPUT None This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. 
""" if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Compute confusion matrix cnf_matrix = confusion_matrix(Y_train, yhat, labels=[1,0]) np.set_printoptions(precision=2) # Plot non-normalized confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=['response=1','response=0'],normalize= False, title='Confusion matrix') ###Output Confusion matrix, without normalization [[ 253 279] [ 9100 33330]] ###Markdown Although 33330 actual class 0 samples are classified correctly which occupy 78% of class 0, 279 actual class 1 samples which are responded potential customers are classified as class 0. That is half of the actual class 1 population. The performance is not satisfying on the standard classifiers. We definitely don't want to lose any of the potential customers. So we choose XBGClassifier from XGBoost in the next step. XGBoost (Extreme Gradient Boosting) is an advanced and more efficient implementation of Gradient Boosting Algorithm which has better performance on imbalanced data. ###Code # Define XGBClassifier xgbr = xgb.XGBClassifier(verbosity=0) print(xgbr) # Fit model on downsampled training data xgbr.fit(X_train_pca_DnSamp, Y_train_DnSample) # Predict on training data yhat = xgbr.predict(X_train_pca) # roc score roc = roc_auc_score(Y_train, yhat) print('ROC score: {}'.format(roc)) # print("Training score: ", xgbr.score(X_train_pca, Y_train)) # cross validation score results = cross_val_score(xgbr, X_train_pca_DnSamp, Y_train_DnSample, cv=3) print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100)) ###Output XGBClassifier(base_score=None, booster=None, colsample_bylevel=None, colsample_bynode=None, colsample_bytree=None, gamma=None, gpu_id=None, importance_type='gain', interaction_constraints=None, learning_rate=None, max_delta_step=None, max_depth=None, min_child_weight=None, missing=nan, monotone_constraints=None, n_estimators=100, n_jobs=None, num_parallel_tree=None, objective='binary:logistic', random_state=None, reg_alpha=None, reg_lambda=None, scale_pos_weight=None, subsample=None, tree_method=None, validate_parameters=False, verbosity=0) ROC score: 0.8522847449758026 Accuracy: 41.79% (4.97%) ###Markdown The ROC score is improved from 0.6 to 0.85. That is great! The Accuracy of cross validation results is only 41.79% though. ###Code # Compute confusion matrix cnf_matrix = confusion_matrix(Y_train, yhat, labels=[1,0]) np.set_printoptions(precision=2) # Plot non-normalized confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=['response=1','response=0'],normalize= False, title='Confusion matrix') ###Output Confusion matrix, without normalization [[ 467 65] [ 7351 35079]] ###Markdown From the confusion matrix, we can see that XGBClassifier correctly identifies 467 class 1 samples, only miss 65 samples. The true positive rate is 467/(467+65)=0.88. The false negtive rate is 7351/(7351+35079)=0.17. 
This is a big improvement in capturing as many of the actual customers as possible.
###Code
print('Training class1: {}, class0: {}, class1/class0: {}'.
      format(sum(Y_train==1),sum(Y_train==0),round(sum(Y_train==1)/sum(Y_train==0),2)))
print('Prediction class1: {}, class0: {}, class1/class0: {}'.
      format(sum(yhat==1),sum(yhat==0),round(sum(yhat==1)/sum(yhat==0),2)))
print('training score: ', xgbr.score(X_train_pca, Y_train))
###Output
Training class1: 532, class0: 42430, class1/class0: 0.01
Prediction class1: 7818, class0: 35144, class1/class0: 0.22
training score: 0.82738233788
###Markdown
2.4 Improve classifier on training data
2.4.1 Build pipeline for grid search
We'll improve the classifier by building a pipeline and tuning it using grid search. Please check the [XGBoost reference](https://xgboost.readthedocs.io/en/latest/parameter.html#parameters-for-tree-booster) for setting parameters in XGBClassifier.
###Code
# Build pipeline
pipeline = Pipeline([
    ('xgbr', xgb.XGBClassifier(verbosity=0))
])
pipeline

# Use GridSearchCV to optimize model
parameters = {
    'xgbr__learning_rate': [0.01, 0.1],
    'xgbr__n_estimators': [200, 500],
    'xgbr__gamma': [0.1, 1.0]
}
model = GridSearchCV(pipeline, param_grid=parameters, scoring='roc_auc', cv=4)
model

# Fit model on downsampled training data and display the best parameters from grid search
model.fit(X_train_pca_DnSamp, Y_train_DnSample)
print("\nBest Parameters:", model.best_params_)

# Check results on cross validation
results = cross_val_score(model, X_train_pca_DnSamp, Y_train_DnSample, cv=3)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

# Predict on training data
yhat = model.predict(X_train_pca)

# roc score
roc = roc_auc_score(Y_train, yhat)
print('ROC score: {}'.format(roc))

# Predict probability on training data
yhat_prob = model.predict_proba(X_train_pca)
yhat_prob[0:2]

# Compute confusion matrix
cnf_matrix = confusion_matrix(Y_train, yhat, labels=[1,0])
np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['response=1','response=0'], normalize=False, title='Confusion matrix')

# print classification_report
print(classification_report(Y_train, yhat))
###Output
             precision    recall  f1-score   support

          0       1.00      0.84      0.91     42430
          1       0.07      0.88      0.12       532

avg / total       0.99      0.84      0.91     42962
###Markdown
After tuning, the ROC score only improves a little, from 0.85 to 0.86. From the confusion matrix, the true positive rate stays the same while the false positive rate decreases from 0.17 to 6601/(6601+35829)=0.16, so more class 0 samples are identified correctly after tuning.
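The rates quoted above follow directly from the confusion matrix layout used in this notebook (labels=[1, 0], so the matrix is [[TP, FN], [FP, TN]]). A small optional helper, shown here only as a sketch, makes the arithmetic explicit; the after-tuning counts are the ones quoted in the text.
###Code
# Optional sketch: derive TPR/FPR from the [[TP, FN], [FP, TN]] matrices discussed above
def rates_from_confusion(cm):
    '''cm is a 2x2 matrix in the labels=[1, 0] layout: [[TP, FN], [FP, TN]]'''
    tp, fn = cm[0]
    fp, tn = cm[1]
    tpr = tp / (tp + fn)  # share of actual class 1 that is caught
    fpr = fp / (fp + tn)  # share of actual class 0 that is flagged as class 1
    return round(tpr, 2), round(fpr, 2)

# Values taken from the matrices shown in this section
print(rates_from_confusion([[467, 65], [7351, 35079]]))   # before tuning
print(rates_from_confusion([[467, 65], [6601, 35829]]))   # after tuning (counts quoted in the text)
###Output
_____no_output_____
###Markdown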
2.4.2 Redo model fitting on training data without downsampling
###Code
def plot_results(Y, Y_pred):
    """
    INPUT
    Y - actual class in 1D array
    Y_pred - predicted class in 1D array

    OUTPUT
    None

    This function prints the AUC score and plots the confusion matrix of the classification results
    """
    # Print ROC AUC score
    auc = roc_auc_score(Y, Y_pred)
    print('AUC score: {}'.format(auc))

    # Compute confusion matrix
    cnf_matrix = confusion_matrix(Y, Y_pred, labels=[1,0])
    # np.set_printoptions(precision=2)

    # Plot non-normalized confusion matrix
    plt.figure()
    plot_confusion_matrix(cnf_matrix, classes=['response=1','response=0'], normalize=False, title='Confusion matrix')

# Fit Logistic Regression classifier on training data after PCA
LR = LogisticRegression(C=0.02, solver='sag').fit(X_train_pca, Y_train)

# Predict on training data
yhat = LR.predict(X_train_pca)

# Display results
plot_results(Y_train, yhat)

# Display model
LR

# Define XGBClassifier
xgbr = xgb.XGBClassifier(verbosity=0)
print(xgbr)

# Fit model on training data after PCA
xgbr.fit(X_train_pca, Y_train)

# Predict on training data
yhat = xgbr.predict(X_train_pca)

# Display results
plot_results(Y_train, yhat)

# Cross validation score
results = cross_val_score(xgbr, X_train_pca, Y_train, cv=3)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

# Use GridSearchCV to optimize model
parameters = {
    'xgbr__n_estimators': [100, 500],
    'xgbr__gamma': [0, 0.1]
}
model = GridSearchCV(pipeline, param_grid=parameters, scoring='roc_auc', cv=4)

# Fit model using training data after PCA
model.fit(X_train_pca, Y_train)

# Print best parameters from grid search
print("\nBest Parameters:", model.best_params_)

# Predict on training data
yhat = model.predict(X_train_pca)

# Display results
plot_results(Y_train, yhat)

# Check results on cross validation
results = cross_val_score(model, X_train_pca, Y_train, cv=3)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
###Output
AUC score: 0.9389097744360902
Confusion matrix, without normalization
[[  467    65]
 [    0 42430]]
Accuracy: 56.04% (2.17%)
###Markdown
Although the model did well on classifying the training data, it only got an AUC score of 0.52 on the test data according to the Kaggle competition feedback. We need to think about how to improve the score on the test data.
2.4.3 Redo model fitting on training data without PCA
Fit XGBClassifier on the training data without PCA to include all features.
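Before refitting, here is an optional sanity check related to the gap noted above (training AUC of 0.94 versus 0.52 on Kaggle): scoring a stratified held-out split gives a more honest estimate than scoring on the data the model was fitted on. This sketch is not part of the original workflow; it assumes X_train and Y_train from the cells above.
###Code
# Optional sketch (not part of the original workflow): estimate AUC on held-out data
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import xgboost as xgb

X_fit, X_hold, y_fit, y_hold = train_test_split(
    X_train, Y_train, test_size=0.3, stratify=Y_train, random_state=42)

clf_check = xgb.XGBClassifier(verbosity=0)
clf_check.fit(X_fit, y_fit)

# Rank by predicted probability of class 1, which is what AUC evaluates
hold_scores = clf_check.predict_proba(X_hold)[:, 1]
print('Held-out AUC: {:.3f}'.format(roc_auc_score(y_hold, hold_scores)))
###Output
_____no_output_____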
###Code
# Define XGBClassifier
xgbr = xgb.XGBClassifier(verbosity=0)
print(xgbr)

# Fit model on training data without PCA
xgbr.fit(X_train, Y_train)

# Predict on training data
yhat = xgbr.predict(X_train)

# Display results
plot_results(Y_train, yhat)

# Cross validation score
results = cross_val_score(xgbr, X_train_pca, Y_train, cv=3)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

# Use GridSearchCV to optimize model
parameters = {
    'xgbr__gamma': [0, 0.1],
    'xgbr__max_depth': [6, 10]
}
model = GridSearchCV(pipeline, param_grid=parameters, scoring='roc_auc', cv=4)

# Fit model using training data without PCA
model.fit(X_train, Y_train)

# Print best parameters from grid search
print("\nBest Parameters:", model.best_params_)

# Predict on training data
yhat = model.predict(X_train)

# Display results
plot_results(Y_train, yhat)

# Check results on cross validation
results = cross_val_score(model, X_train, Y_train, cv=3)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
###Output
AUC score: 0.9172932330827068
Confusion matrix, without normalization
[[  444    88]
 [    0 42430]]
Accuracy: 67.15% (2.14%)
###Markdown
Comparing the model after grid search with the xgbr model before grid search, the model before grid search did slightly better at classifying class 1 samples, missing only 86 samples rather than the 88 missed after grid search. So the xgbr model before grid search will be used to predict on the test data.
**Discussion**
Although downsampling and the XGB classifier were used in this project to better deal with imbalanced data, there are still other techniques that could improve the results.
- Data resampling
Before doing machine learning, there are other ways to resample the data besides the downsampling used in this project. A popular method to try is oversampling based on clusters. After applying k-means clustering to the data, the minority class 1 samples can be duplicated within each of its clusters until the data size is equivalent to that of the majority class 0. The advantage of oversampling is that no information in the original data is lost, although you would want to try it on a more powerful computer.
- Bagging-based & boosting-based techniques on machine learning
Bagging is an abbreviation of Bootstrap Aggregating. It generates N different bootstrap training samples by sampling with replacement from the original data. Each bootstrapped sample is used to train one classifier, such as logistic regression or a decision tree, and the results from the individual classifiers are combined to come up with an improved one.
Other boosting-based classifiers, such as the AdaBoost classifier, can also be tried besides the XGB classifier used in this project. Even for the XGB classifier, more parameters can be optimized when computational resources are available.
Part 3: Kaggle Competition
Now that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!
Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition.
The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 3.1 Prepare test data First, load and clean the test data. ###Code # Load and clean test data mailout_test = load_and_clean_data('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv') ###Output File loaded... data shape before cleaning: (42833, 366) NaN in numeric columns filled... File cleaned. data shape after cleaning: (42833, 436) ###Markdown Also load the "LNR" column to be saved in the prediction report. ###Code # Load 'LNR' column from original test data file ID = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';', usecols=['LNR']) ###Output _____no_output_____ ###Markdown Then transform the test data using the same PCA model fit by the training data. ###Code # Perform PCA on test data as same as PCA on training data # X_test_pca = pca.transform(mailout_test) ###Output _____no_output_____ ###Markdown 3.2 Predict on test data Predict on the test data using the refined model. ###Code # Predict on test data before pca y_test_est = xgbr.predict_proba(mailout_test) #y_test_est = model.predict_proba(mailout_test) ## Predict on test data after pca # y_test_est = model.predict_proba(X_test_pca) # Save predic results to dataframe df_predict = ID df_predict['RESPONSE'] = y_test_est[:,1] print('{}% in the test data has "RESPONSE"=1.'.format(round(100*sum(df_predict['RESPONSE']>=0.5)/df_predict.shape[0],2))) np.where(df_predict['RESPONSE']>=0.5) ###Output _____no_output_____ ###Markdown The model predicts 0.6% percent of class 1 samples in the test data. Sounds reasonable.Finally, save the results into csv file for Kaggle competition submission. It scored 0.70685 at the competition. ###Code df_predict.to_csv('predict_test.csv', index=False, sep=',', encoding='utf-8') # Save notebook to html format, return 0 if succeed from subprocess import call call(['python', '-m', 'nbconvert', 'Arvato Project Workbook.ipynb']) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, I will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. I'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, I'll apply what I've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that I will use has been provided by Udacity's partners at Bertelsmann Arvato Analytics, and represents a real-life data science task. 
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import time import random import pickle import xgboost from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans from sklearn.decomposition import PCA from xgboost import XGBClassifier from imblearn.over_sampling import SMOTE from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV, RepeatedStratifiedKFold from sklearn.metrics import f1_score, accuracy_score, confusion_matrix, roc_auc_score, plot_confusion_matrix from numpy import mean # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the Data ###Code # load in the data - I saved locally with "," as a separator # please change to ";" for the original dataset; adapt path as needed! try: azdias = pd.read_csv('Udacity_AZDIAS_052018.csv', sep=',') customers = pd.read_csv('Udacity_CUSTOMERS_052018.csv', sep=',') except: print("You do not have access to the files.") # read feature information; adapt path as needed try: feat_top = pd.read_excel('DIAS Information Levels - Attributes 2017.xlsx') feat_det = pd.read_excel('DIAS Attributes - Values 2017.xlsx') feat_sum = pd.read_csv('DIAS_Attributes_Summary_DSND1.csv') except: print("You do not have access to the files.") ###Output _____no_output_____ ###Markdown Part 0.0: Understanding feature informationThe provided Excel Data seem to contain important information about the values of each feature - notably, values that correspond to "unknown" data. ###Code f_rows, f_cols = feat_det.shape print("Detailed mapping of data values for each feature") print("Number of rows: {}".format(f_rows)) print("Number of cols: {}".format(f_cols)) print(feat_det.head()) # Using forward fill to eliminate NaNs from Attribute Column feat_det['Attribute'].ffill(inplace = True) # Correcting for "GROB" Categories - formatation error leads to "NaN" Values print(feat_det[feat_det['Meaning'].isna() == True]) feat_det['Meaning'].ffill(inplace = True) # Creating a new data frame with "unknown" values - which are equivalent to NaN, but receive a number miss_val = feat_det[(feat_det['Meaning'] == 'unknown')][['Attribute', 'Value']].reset_index(drop=True) # Checking if there are any duplicated Attributes, that would need to be consolidated # True if no extra steps are required print(eval('miss_val[\'Attribute\'].nunique()==miss_val.shape[0]')) # Transforming 'Value' Strings into lists of integers def split_transform(x): ''' INPUT: x - (string) format '[a, b]' to be transformed into a list of integers OUTPUT: [x] - (list) list of integers ''' try: a_list = x.split(',') map_object = map(int, a_list) list_ints = list(map_object) return list_ints except: return [x] miss_val['Value'] = miss_val['Value'].apply(split_transform) ###Output _____no_output_____ ###Markdown Part 0.1: General Population DatasetThe goal of this section is to understand the "Azdias" dataset, and provide information for the data cleaning. 
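Before exploring the dataset itself, here is a quick sanity check of the `split_transform` helper defined in Part 0.0 above; the value strings below are made up for illustration and are not taken from the attribute file.
###Code
# Illustrative check of split_transform on made-up value strings
print(split_transform('-1, 9'))   # expected: [-1, 9]
print(split_transform('-1'))      # expected: [-1]
print(split_transform('XX'))      # non-numeric input falls back to a one-element list: ['XX']
###Output
_____no_output_____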
###Code # exploring the general population dataset a_rows, a_cols = azdias.shape print("General Population at Large") print("Number of rows: {}".format(a_rows)) print("Number of cols: {}".format(a_cols)) print(azdias.head()) ###Output General Population at Large Number of rows: 891221 Number of cols: 366 LNR AGER_TYP AKT_DAT_KL ALTER_HH ALTER_KIND1 ALTER_KIND2 \ 0 910215 -1 NaN NaN NaN NaN 1 910220 -1 9.0 0.0 NaN NaN 2 910225 -1 9.0 17.0 NaN NaN 3 910226 2 1.0 13.0 NaN NaN 4 910241 -1 1.0 20.0 NaN NaN ALTER_KIND3 ALTER_KIND4 ALTERSKATEGORIE_FEIN ANZ_HAUSHALTE_AKTIV ... \ 0 NaN NaN NaN NaN ... 1 NaN NaN 21.0 11.0 ... 2 NaN NaN 17.0 10.0 ... 3 NaN NaN 13.0 1.0 ... 4 NaN NaN 14.0 3.0 ... VHN VK_DHT4A VK_DISTANZ VK_ZG11 W_KEIT_KIND_HH WOHNDAUER_2008 \ 0 NaN NaN NaN NaN NaN NaN 1 4.0 8.0 11.0 10.0 3.0 9.0 2 2.0 9.0 9.0 6.0 3.0 9.0 3 0.0 7.0 10.0 11.0 NaN 9.0 4 2.0 3.0 5.0 4.0 2.0 9.0 WOHNLAGE ZABEOTYP ANREDE_KZ ALTERSKATEGORIE_GROB 0 NaN 3 1 2 1 4.0 5 2 1 2 2.0 5 2 3 3 7.0 3 2 4 4 3.0 4 1 3 [5 rows x 366 columns] ###Markdown Replacing "Unknown" Values ###Code # creating a copy of dataset to perform cleaning azdias_clean = azdias.copy() # Replacing "unknown" values with NaN for index in miss_val.index: current_atr = miss_val.loc[index]['Attribute'] current_list = miss_val.loc[index]['Value'] for value in current_list: try: # some features are not present in azdias azdias_clean.loc[:, current_atr].replace(value, np.nan, inplace = True) except: continue # Checking for random columns # It seems to work! for column in random.sample(set(miss_val['Attribute'].unique()),5): try: print('Column name:', column) print('Missing Values:', miss_val[miss_val['Attribute'] == column]['Value'].values) print('azdias_original: \n', azdias[column].value_counts(dropna=False), '\n') print('azdias_cleaned: \n', azdias_clean[column].value_counts(dropna=False), '\n \n') except: pass ###Output Column name: KBA13_KW_90 Missing Values: [list([-1])] azdias_original: 3.0 277407 2.0 181685 0.0 133326 NaN 105800 4.0 82747 5.0 58683 1.0 51573 Name: KBA13_KW_90, dtype: int64 azdias_cleaned: 3.0 277407 2.0 181685 0.0 133326 NaN 105800 4.0 82747 5.0 58683 1.0 51573 Name: KBA13_KW_90, dtype: int64 Column name: KBA13_BJ_2004 Missing Values: [list([-1])] azdias_original: 3.0 364930 2.0 166228 4.0 157705 NaN 105800 1.0 49522 5.0 47036 Name: KBA13_BJ_2004, dtype: int64 azdias_cleaned: 3.0 364930 2.0 166228 4.0 157705 NaN 105800 1.0 49522 5.0 47036 Name: KBA13_BJ_2004, dtype: int64 Column name: KBA05_KW3 Missing Values: [list([-1, 9])] azdias_original: 1.0 233518 0.0 206843 2.0 160776 NaN 133324 3.0 80358 4.0 61616 9.0 14786 Name: KBA05_KW3, dtype: int64 azdias_cleaned: 1.0 233518 0.0 206843 2.0 160776 NaN 148110 3.0 80358 4.0 61616 Name: KBA05_KW3, dtype: int64 Column name: KBA13_HERST_FORD_OPEL Missing Values: [list([-1])] azdias_original: 3.0 326805 2.0 164003 4.0 154044 NaN 105800 1.0 74276 5.0 66293 Name: KBA13_HERST_FORD_OPEL, dtype: int64 azdias_cleaned: 3.0 326805 2.0 164003 4.0 154044 NaN 105800 1.0 74276 5.0 66293 Name: KBA13_HERST_FORD_OPEL, dtype: int64 Column name: KBA05_MOTOR Missing Values: [list([-1, 9])] azdias_original: 3.0 289858 2.0 222119 NaN 133324 1.0 121085 4.0 110049 9.0 14786 Name: KBA05_MOTOR, dtype: int64 azdias_cleaned: 3.0 289858 2.0 222119 NaN 148110 1.0 121085 4.0 110049 Name: KBA05_MOTOR, dtype: int64 ###Markdown Dealing with Missing Values - Columns ###Code # Identifying the proportion of NaN values nan_prop = azdias_clean.isna().mean().round(4) * 100 # Sorting and counting values 
nan_prop.value_counts().sort_index() # Analyzing the results clean_cols = len(nan_prop[nan_prop == 0]) nan_cols = nan_prop.shape[0] - clean_cols print('{} ({:0.1f}%) columns have no missing values'.format(clean_cols, 100*clean_cols/nan_prop.shape[0])) print('{} ({:0.1f}%) columns have at least one missing value'.format(nan_cols, 100*nan_cols/nan_prop.shape[0])) # Distribution of missing values plt.figure(figsize=(15,10)); plt.hist(nan_prop.values, bins=20); plt.title('Distribution of Missing Values - Columns (Percentage)') plt.xlabel('Percentage of missing values (%)'); plt.ylabel('Column Count'); # Analyzing the results (>=20% missing values are clearly outliers) clean_cols = len(nan_prop[nan_prop < 20]) nan_cols = nan_prop.shape[0] - clean_cols print('{} ({:0.1f}%) columns have less than 20% missing values'.format(clean_cols, 100*clean_cols/nan_prop.shape[0])) print('{} ({:0.1f}%) columns have at least 20% of its values missing'.format(nan_cols, 100*nan_cols/nan_prop.shape[0])) # Visualizing the percentage of NaNs w_nan_prop = (nan_prop[nan_prop > 0].sort_values(ascending=False))[:20] w_nan_prop.plot.bar(figsize=(15,10), facecolor ='b') plt.title('Top 20 Percentage of Missing Values') plt.xlabel('Column name with missing values') plt.ylabel('Percentage of missing values') plt.show() ###Output _____no_output_____ ###Markdown From the chart above, we notice some columns with more than 1/5 of its values missing. Figure 1 shows that those are clearly the outliers - 94.8% of the columns (347) have less than 20% of its values missing. To reduce the dataset's complexity, I will choose to drop those 19 columns in the data cleaning step. ###Code azdias_col_drop = list(nan_prop[nan_prop >= 20].index) azdias_col_keep = list(nan_prop[nan_prop < 20].index) azdias_clean = azdias_clean[azdias_col_keep] ###Output _____no_output_____ ###Markdown Dealing with Missing Values - Rows ###Code # Missing Values in rows na_rows_sum = azdias_clean.isna().sum(axis=1) # Distribution of missing values in percentual values prop_nan_rows = 100*na_rows_sum.values/len(azdias_clean.columns) plt.figure(figsize=(15,10)); plt.hist(prop_nan_rows, bins=100); plt.title('Distribution of Missing Values - Rows (Percentage)') plt.xlabel('Percentage of missing values (%)'); plt.ylabel('Row Count'); # Analyzing the results (>=10% missing values are clearly outliers) clean_rows = len(prop_nan_rows[prop_nan_rows < 10]) nan_rows = prop_nan_rows.shape[0] - clean_rows print('{} ({:0.1f}%) rows have less than 10% missing values'.format(clean_rows, 100*clean_rows/prop_nan_rows.shape[0])) print('{} ({:0.1f}%) rows have at least 10% of its values missing'.format(nan_rows, 100*nan_rows/prop_nan_rows.shape[0])) ###Output 737287 (82.7%) rows have less than 10% missing values 153934 (17.3%) rows have at least 10% of its values missing ###Markdown As Figure 3 and the analysis above show, removing rows with at least 10% of its values missing will still retain 82.7% of the values. To increase the informative value and meaningfullness of the dataset, I will choose to remove those rows in the data cleaning step. ###Code azdias_row_drop = list(na_rows_sum[prop_nan_rows >= 10].index) azdias_row_keep = list(na_rows_sum[prop_nan_rows < 10].index) azdias_clean = azdias_clean.drop(azdias_row_drop) ###Output _____no_output_____ ###Markdown Analyzing and selecting featuresThis section uses the information provided in the Data Science Nanodegree Term 1, in order to identify the true data types of the data set (using the file *DIAS_Attributes_Summary.csv*). 
However, please note that the provided information is not complete.Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, I will need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. There are five types to be found:1. Numeric2. Interval3. Ordinal4. Mixed5. CategoricalI will keep the numeric and interval data without changes.Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, I made the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).Special handling will be necessary for the remaining two variable types: categorical, and 'mixed'. ###Code # Analyzing data types and their frequency - majority is stored as a number azdias_clean.dtypes.value_counts() # the feature summary file from DSND1 comes in handy to identify data types # Interestingly enough, the majority of categorical data is store as a number feat_sum['type'].value_counts() # Checking if all azdias columns are present in feat_sum not_feat_sum = [col for col in azdias_clean.columns if col not in feat_sum['attribute'].values] not_azdias = [col for col in feat_sum['attribute'].values if col not in azdias_clean.columns] print('{} AZDIAS Attributes have no information on data type.'.format(len(not_feat_sum))) print('{} Feature Summary Attributes were not found on AZDIAS.'.format(len(not_azdias))) ###Output 43 AZDIAS Attributes have no information on data type. 20 Feature Summary Attributes were not found on AZDIAS. ###Markdown Deep Dive: Categorical Data ###Code feat_sum[feat_sum['type'] == 'categorical']['attribute'] # First, let's take a look into categorical data # Finding 'categorical' Attributes and how they were stored cat_data = [col for col in feat_sum[feat_sum['type'] == 'categorical']['attribute'].values if col in azdias_clean.columns] print('{} Attributes are listed as "categorical".'.format(len(cat_data))) print('However, they are stored in the database as follows:') print(azdias_clean[cat_data].dtypes.value_counts()) # Now, let's look into their values and find out where re-encoding is necessary att_to_re_encode = [] for attribute in cat_data: possible_values = azdias_clean[attribute].unique() for value in possible_values: try: float(value) except: # attribute value is not numerical - re-encoding is needed! att_to_re_encode.append(attribute) break # Visualizing those attributes and their values for attribute in att_to_re_encode: print(attribute) print(azdias_clean[attribute].value_counts(), '\n') # OST_WEST_KZ requires re-encoding. # I will choose 1 for "West", 0 for "Ost" azdias_clean['OST_WEST_KZ'] = azdias_clean['OST_WEST_KZ'].apply(lambda x: 1 if x == 'W' else 0) # CAMEO_DEUG_2015 requires re-encoding. # All numbers will be converted to float and then string; X will be replaced by nan azdias_clean['CAMEO_DEUG_2015'].replace('X', np.nan, inplace=True) azdias_clean['CAMEO_DEUG_2015'] = azdias_clean['CAMEO_DEUG_2015'].apply(float).apply(str) # check azdias_clean['CAMEO_DEUG_2015'].value_counts() # CAMEO_DEU_2015 has several categories. 
This might be harmful when creating dummy columns # So I will choose to drop it print('There are {} distinct categories of the CAMEO_DEU_2015 Attribute.'.format(len(azdias_clean['CAMEO_DEU_2015'].unique()))) azdias_clean.drop(columns=['CAMEO_DEU_2015'], inplace=True) # Now, let's examine the other columns saved as "object" print(azdias_clean.select_dtypes('object')) # "EIGENFUEGT_AM" is a data stamp of (probably) the entry on the database # That's irrelevant to our problem, so we can drop the column azdias_clean.drop(columns=['EINGEFUEGT_AM'], inplace=True) # Looking at their description, "Fein" (Fine) and "Grob" (Rough) seem to convey the same data # For simplicity, we will keep the "Grob" values only azdias_clean.drop(columns=['LP_FAMILIE_FEIN', 'LP_STATUS_FEIN'], inplace=True) feat_top.loc[17:20][['Attribute', 'Description']] # Finally, let's look at our categorical columns and re-encode those with multiple values: fin_cat_data = [col for col in feat_sum[feat_sum['type'] == 'categorical']['attribute'].values if col in azdias_clean.columns] to_encode = azdias_clean[fin_cat_data].nunique()[azdias_clean[fin_cat_data].nunique()>2] to_encode # Now we add the dummy columns for the above mentioned attributes azdias_dummies = pd.get_dummies(azdias_clean, columns = to_encode.index) azdias_dummies.drop(columns=[x for x in azdias_dummies.columns if '_nan' in x], inplace = True) ###Output _____no_output_____ ###Markdown Deep Dive: Mixed DataThere are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention:* "PRAEGENDE_JUGENDJAHRE" combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, you should create two new variables to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.* "CAMEO_INTL_2015" combines information on two axes: wealth and life stage. Break up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values). ###Code # Identify and visualize "mixed" data mix_data = [col for col in feat_sum[feat_sum['type'] == 'mixed']['attribute'].values if col in azdias_dummies.columns] for att in mix_data: print(att) print(azdias_dummies[att].value_counts(), '\n') # Let's start with "Prägende Jugendjahre" and split it into decade/movement # Here, 4 means the 40s, .... and 9 the 90s. decade_dic = {1: 4, 2: 4, 3: 5, 4: 5, 5: 6, 6: 6, 7: 6, 8: 7, 9: 7, 10: 8, 11: 8, 12: 8, 13: 8, 14: 9, 15: 9} # Here, 0 means Mainstream and 1 means Avant-Garde move_dic = {1: 0, 2: 1, 3: 0, 4: 1, 5: 0, 6: 1, 7: 1, 8: 0, 9: 1, 10: 0, 11: 1, 12: 0, 13: 1, 14: 0, 15: 1} # Creating new columns azdias_dummies['DECADE_PRAGENDE_JUGENDJAHRE'] = azdias_dummies['PRAEGENDE_JUGENDJAHRE'].map(decade_dic) azdias_dummies['MOV_PRAGENDE_JUGENDJAHRE'] = azdias_dummies['PRAEGENDE_JUGENDJAHRE'].map(move_dic) azdias_dummies.drop(columns='PRAEGENDE_JUGENDJAHRE', inplace= True) # Moving to 'Cameo_Intl_15'... 
# "tens" correspond to wealth azdias_dummies['WEALTH_CAMEO_INTL_2015'] = azdias_dummies['CAMEO_INTL_2015'].astype(str).str[0] azdias_dummies['WEALTH_CAMEO_INTL_2015'].replace(['n', 'a', 'X'], np.nan, inplace = True) #NaNs are stored as "n" # "ones" correspond to life stage azdias_dummies['LIFE_CAMEO_INTL_2015'] = azdias_dummies['CAMEO_INTL_2015'].astype(str).str[1] azdias_dummies['LIFE_CAMEO_INTL_2015'].replace(['n', 'a', 'X'], np.nan, inplace = True) #NaNs are stored as "a" # drop original col azdias_dummies.drop(columns='CAMEO_INTL_2015', inplace = True) # Additionally, I will drop both "Lebensphase" Columns, as the "Life Stage" info is already there! azdias_dummies.drop(columns=['LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB'], inplace = True) ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of the analysis will come in this part of the project. Here, unsupervised learning techniques were used to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, I will be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Part 1.1. Writing a cleaning functionThis function contains all pre-processing steps, and yields clean data sets, ready for analysis:* Replace "Unknown" Values with NaN* Removes Columns with at least 20% NaN* Removes Rows with at least 10% NaN* Removes unnecessary mixed and categorical values* Replaces Categorical Data with Dummy Variables* Re-Encodes mixed and categorical dataWe will apply it both on the general population dataset as well as on the customers dataset. ###Code def clean_dataset(df, model=False, df_general=azdias, df_customers=customers, feat_det=feat_det, feat_sum=feat_sum): ''' INPUT: df - (pandas dataframe) to be cleaned/pre-processed model - (boolean) True if rows should not be deleted, False if rows should be deleted df_general - (pandas dataframe) general population data df_customers - (pandas dataframe) customers data feat_det - (pandas dataframe) attribute values data feat_sum - (pandas dataframe) attribute summary data OUTPUT: df - (pandas dataframe) cleaned, pre-processed dataset ''' # Creating a new data frame with "unknown" values - which are equivalent to NaN, but receive a number miss_val = feat_det[(feat_det['Meaning'] == 'unknown')][['Attribute', 'Value']].reset_index(drop=True) # Transforming 'Value' Strings into lists of integers miss_val['Value'] = miss_val['Value'].apply(split_transform) # Replacing "unknown" values with NaN for index in miss_val.index: current_atr = miss_val.loc[index]['Attribute'] current_list = miss_val.loc[index]['Value'] for value in current_list: try: # some features are not present in df df.loc[:, current_atr].replace(value, np.nan, inplace = True) except: continue # Missing values in general population attributes nan_prop = df_general.isna().mean().round(4) * 100 df_col_drop = list(nan_prop[nan_prop >= 20].index) df_col_keep = list(nan_prop[nan_prop < 20].index) df = df[df_col_keep] # Missing Values in rows - except when using train/test data if model == False: na_rows_sum = df.isna().sum(axis=1) prop_nan_rows = 100*na_rows_sum.values/len(df.columns) df_row_drop = list(na_rows_sum[prop_nan_rows >= 10].index) df_row_keep = list(na_rows_sum[prop_nan_rows < 10].index) df = df.drop(df_row_drop) # Re-Encoding Categorical features df['OST_WEST_KZ'] = 
df['OST_WEST_KZ'].apply(lambda x: 1 if x == 'W' else 0) df['CAMEO_DEUG_2015'].replace('X', np.nan, inplace=True) df['CAMEO_DEUG_2015'] = df['CAMEO_DEUG_2015'].apply(float).apply(str) # Dropping unnecessary columns df.drop(columns=['CAMEO_DEU_2015', 'EINGEFUEGT_AM', 'LP_FAMILIE_FEIN', 'LP_STATUS_FEIN'], inplace=True) # Creating dummy variables fin_cat_data = [col for col in feat_sum[feat_sum['type'] == 'categorical']['attribute'].values if col in df.columns] to_encode = df[fin_cat_data].nunique()[df[fin_cat_data].nunique()>2] df = pd.get_dummies(df, columns = to_encode.index) df.drop(columns=[x for x in df.columns if '_nan' in x], inplace = True) # PRAEGENDE_JUGENDJAHRE # Here, 4 means the 40s, .... and 9 the 90s. decade_dic = {1: 4, 2: 4, 3: 5, 4: 5, 5: 6, 6: 6, 7: 6, 8: 7, 9: 7, 10: 8, 11: 8, 12: 8, 13: 8, 14: 9, 15: 9} # Here, 0 means Mainstream and 1 means Avant-Garde move_dic = {1: 0, 2: 1, 3: 0, 4: 1, 5: 0, 6: 1, 7: 1, 8: 0, 9: 1, 10: 0, 11: 1, 12: 0, 13: 1, 14: 0, 15: 1} # Creating new columns df['DECADE_PRAGENDE_JUGENDJAHRE'] = df['PRAEGENDE_JUGENDJAHRE'].map(decade_dic) df['MOV_PRAGENDE_JUGENDJAHRE'] = df['PRAEGENDE_JUGENDJAHRE'].map(move_dic) # CAMEO_INTL_2015 # "tens" correspond to wealth df['WEALTH_CAMEO_INTL_2015'] = df['CAMEO_INTL_2015'].astype(str).str[0] df['WEALTH_CAMEO_INTL_2015'].replace(['n', 'a', 'X'], np.nan, inplace = True) #NaNs are stored as "n", "a", "X" # "ones" correspond to life stage df['LIFE_CAMEO_INTL_2015'] = df['CAMEO_INTL_2015'].astype(str).str[1] df['LIFE_CAMEO_INTL_2015'].replace(['n', 'a', 'X'], np.nan, inplace = True) #NaNs are stored as "n", "a", "X" # Dropping unnecessary columns df.drop(columns=['LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'PRAEGENDE_JUGENDJAHRE', 'CAMEO_INTL_2015'], inplace = True) return df # Testing function clean_azdias = clean_dataset(azdias) clean_azdias.equals(azdias_dummies) # Applying the same process to the customers dataset # first, we remove the columns that don't appear in the general pop dataset cust_clean = customers.drop(columns=['CUSTOMER_GROUP', 'ONLINE_PURCHASE','PRODUCT_GROUP']) cust_clean = clean_dataset(cust_clean) # Comparing both cleaned datasets to check if same features are present [x for x in clean_azdias.columns if x not in cust_clean.columns] # This feature seems to be a dummy column introduced in the cleaning step... cust_clean.columns[cust_clean.columns.str.contains('GEBAEUDETYP')] # So I will add a column for this with zeros cust_clean['GEBAEUDETYP_5.0'] = [0] * len(cust_clean.index) # Final check: columns in cust_clean that are not in clean_azdias [x for x in cust_clean.columns if x not in clean_azdias.columns] ###Output _____no_output_____ ###Markdown Part 1.2. Feature TransformationBefore we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features.A StandardScaler, which scales each feature to mean 0 and standard deviation 1, was used. According to sklearn's [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html), NaNs are treated as in StandardScaler missing values: disregarded in fit, and maintained in transform. Hence, I inputed missing values after the scaling, so that the replacing values do not influence the scaling process. 
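The scale-then-impute order can be sanity-checked on a tiny toy frame (purely illustrative data, not the project data): StandardScaler ignores the NaN when fitting and keeps it in the transformed output, and SimpleImputer then fills it with the column mean of the scaled values, which is approximately 0.
###Code
# Tiny illustrative check of the scale-then-impute order (toy data, not the project data)
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer

toy = pd.DataFrame({'a': [1.0, 2.0, np.nan, 4.0]})
toy_scaled = pd.DataFrame(StandardScaler().fit_transform(toy), columns=toy.columns)
print(toy_scaled)   # the NaN is preserved after scaling

toy_filled = pd.DataFrame(
    SimpleImputer(missing_values=np.nan, strategy='mean').fit_transform(toy_scaled),
    columns=toy_scaled.columns)
print(toy_filled)   # the NaN is replaced by the mean of the scaled column, i.e. ~0
###Output
_____no_output_____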
###Code # Scaling features with Standard Scaler - there is no need to remove the NaNs beforehand scaler = StandardScaler() clean_azdias_scaled = pd.DataFrame(scaler.fit_transform(clean_azdias), columns = clean_azdias.columns) # Dealing with remaining NaNs imputer = SimpleImputer(missing_values = np.nan, strategy = 'mean') azdias_scaled_nonan = pd.DataFrame(imputer.fit_transform(clean_azdias_scaled), columns = clean_azdias_scaled.columns) ###Output _____no_output_____ ###Markdown Part 1.3. Dimensionality ReductionIn the words of [this informative blog post](https://medium.com/@cxu24/why-dimensionality-reduction-is-important-dd60b5611543): >In addition to avoiding overfitting and redundancy, dimensionality reduction also leads to better human interpretations and less computational cost with simplification of models.By applying sklearn's PCA class (Principal Component Analysis), we are able to identify which components explain most of the variance of the data and to select those to our model. I have written a function to apply these steps to the customer data as well. ###Code # PCA to all components pca = PCA().fit(azdias_scaled_nonan) # Visualizing results - Ratio of variance explained and cumulative variance explained by components fig, ax1 = plt.subplots(figsize=(15,10)) # Cumulative Variance Explained color = 'tab:red' ax1.set_xlabel('# Components') ax1.set_ylabel('Cumulative Variance Explained', color=color) ax1.plot(np.cumsum(pca.explained_variance_ratio_), color=color) ax1.tick_params(axis='y', labelcolor=color) ax2 = ax1.twinx() # Ratio of Variance Explained color = 'tab:blue' ax2.set_ylabel('Ratio of Variance Explained', color=color) ax2.plot(pca.explained_variance_ratio_, color=color) ax2.tick_params(axis='y', labelcolor=color) # Draw a line where the cumulative explained variance hits 80% comp_80 = np.where(np.cumsum(pca.explained_variance_ratio_) > 0.8)[0][0] ax1.axvline(comp_80, linestyle='dashed', color='black') ax1.axhline(0.8, linestyle='dashed', color='black') ax1.set_title('Principal Component Analysis: # Components vs. Explained Variance') fig.tight_layout() print('We need {} components to explain 80% of the variance on our data!'.format(comp_80)) # As mentioned above, we will pick the first 145 components for our model - keeping a 80% explained variance reduction = round(100*(1 - comp_80/len(azdias_scaled_nonan.columns)),1) print ('Selecting these {} components represents a reduction of {}% on the size of our dataset'. 
format(comp_80, reduction))

# re-fit a PCA instance to perform the decided-on transformation
pca_red = PCA(n_components=comp_80, random_state=42)
azdias_pca = pca_red.fit_transform(azdias_scaled_nonan)

# Writing a function to summarize those steps
def scale_imput_pca(df, n_components, random_state=42):
    '''
    INPUT:
    df - (pandas dataframe) dataset to be scaled, imputed with mean & pca transformed
    n_components - (int) number of pca components
    random_state - (int) random state for PCA

    OUTPUT:
    df_pca - (numpy array) principal component representation of the dataset
    '''
    # scaling
    scaler = StandardScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(df), columns = df.columns)

    # imputing
    imputer = SimpleImputer(missing_values = np.nan, strategy = 'mean')
    df_scaled_nonan = pd.DataFrame(imputer.fit_transform(df_scaled), columns = df_scaled.columns)

    # pca
    pca_red = PCA(n_components=n_components, random_state=random_state)
    df_pca = pca_red.fit_transform(df_scaled_nonan)

    return df_pca

# test function
azdias_pca_2 = scale_imput_pca(clean_azdias, comp_80)

# testing output
np.array_equal(azdias_pca, azdias_pca_2)

# apply to customers dataset
cust_pca = scale_imput_pca(cust_clean, comp_80)
###Output
_____no_output_____
###Markdown
Part 1.4: Interpret Principal Components
Each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component points in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one can be expected to be associated with increases in the other. In contrast, features with weights of different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.
###Code
# Observing the components' dimensions, we realize each feature is a column
# whereas each row represents one component
pca_red.components_.shape

# Mapping component weights to features
map_pca = pd.DataFrame(pca_red.components_.transpose(), index = azdias_scaled_nonan.columns)
map_pca.head()

# Joining with feature description
map_pca_des = pd.concat([feat_top.set_index('Attribute')['Description'], map_pca], axis=1, sort=False).sort_index()
map_pca_des.dropna(subset=[0], inplace = True)

# Most-relevant features are at the beginning (positive correlation) and at the end (negative correlation)
def most_relevant_feats(component, n):
    '''
    INPUT:
    component - (int) component number
    n - (int) number of features to be shown

    OUTPUT:
    df - (pandas dataframe) most relevant features of the component, their weights and descriptions
    '''
    # highest positive correlation
    pos_cor = map_pca_des.sort_values(by=component)[['Description',component]].tail(n)

    # highest negative correlation
    neg_cor = map_pca_des.sort_values(by=component)[['Description',component]].head(n)

    # concat both
    df = pd.concat([pos_cor, neg_cor], axis = 0).replace(np.nan, "-")

    return df

# looking into the features of the first 10 components
for comp in range(0, 10):
    print('Component ', comp)
    display(most_relevant_feats(comp, 5).style.background_gradient(cmap='Blues',subset=[comp]))
###Output
Component 0
###Markdown
We notice that some of the features are not present in the feature description file, which makes the interpretation of each component more difficult.
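As an optional check of the sign interpretation described at the start of Part 1.4, the sketch below (which assumes `map_pca` and `azdias_scaled_nonan` from the cells in this part) picks the two features with the largest positive weights on component 0 and looks at whether they are indeed positively correlated in the scaled data.
###Code
# Optional sketch: two features with large same-sign weights on component 0
# should tend to be positively correlated in the scaled data
top_feats = map_pca[0].sort_values(ascending=False).head(2).index.tolist()
print('Two features with the largest positive weights on component 0:', top_feats)
print('Correlation in the scaled data:',
      round(azdias_scaled_nonan[top_feats[0]].corr(azdias_scaled_nonan[top_feats[1]]), 2))
###Output
_____no_output_____
###Markdown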
Part 1.5: Clustering General Population & CustomersIn this substep, I used sklearn's [KMeans](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.htmlsklearn.cluster.KMeans) class to perform k-means clustering on the PCA-transformed data. The elbow method (with the KMeans object's .score() method, i.e. average within-cluster distance) was applied to define the optimal number of clusters. ###Code # performing and scoring KMeans up to 15 clusters start = time.time() clusters = list(range(1,16)) scores = [] for cluster in clusters: kmeans_k = KMeans(cluster) model_k = kmeans_k.fit(azdias_pca) scores.append(abs(model_k.score(azdias_pca))) end = time.time() print("Time elapsed: {} minutes".format((end-start)/60)) # Visualizing results plt.figure(figsize=(15,10)) plt.plot(clusters, scores, linestyle='-', marker='o'); plt.title('K-Means: # Clusters Analysis') plt.ylabel('Average Within-Cluster Distances'); plt.xlabel('# Clusters'); ###Output _____no_output_____ ###Markdown The picture above shows a rather constant rate of decay after 10 clusters, so we will pick this number for our model. ###Code %%time kmeans_k = KMeans(10) model_k = kmeans_k.fit(azdias_pca) # General Population prediction_clusters_azdias = model_k.predict(azdias_pca) # Customers prediction_clusters_cust = model_k.predict(cust_pca) # Taking a look at the clusters prediction_clusters_azdias ###Output _____no_output_____ ###Markdown Part 1.6: Comparing ClustersConsider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics. 
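A complementary way to read the comparison described above (a sketch, reusing the cluster predictions computed in Part 1.5) is an over-representation ratio per cluster: the share of customers in a cluster divided by the share of the general population in that cluster. Values well above 1 flag target clusters and values well below 1 flag non-target clusters; the next cell then looks at the same question via the difference in proportions.
###Code
# Sketch: over-representation ratio per cluster (customer share / population share)
cust_share = pd.Series(prediction_clusters_cust).value_counts(normalize=True)
pop_share = pd.Series(prediction_clusters_azdias).value_counts(normalize=True)
over_rep = (cust_share.reindex(pop_share.index, fill_value=0) / pop_share).sort_values(ascending=False)
print(over_rep.round(2))   # > 1: over-represented among customers, < 1: under-represented
###Output
_____no_output_____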
###Code # Comparing % of individuals in each cluster for customers and general population cust_df = pd.DataFrame(100*pd.Series(prediction_clusters_cust).value_counts()/len(prediction_clusters_cust)) pop_df = pd.DataFrame(100*pd.Series(prediction_clusters_azdias).value_counts()/len(prediction_clusters_azdias)) diff = pd.DataFrame(cust_df-pop_df) diff.rename(columns={0: 'Delta'}, inplace = True) cust_df.rename(columns={0: '% Customers'}, inplace = True) pop_df.rename(columns={0: '% General Population'}, inplace = True) diff_comp = diff.merge(cust_df, left_index = True, right_index = True).merge(pop_df, left_index = True, right_index = True) diff_comp.round(2).sort_values(by='Delta', ascending=False).style.background_gradient(cmap='Blues',subset=['Delta']) # Customer Clusters (diff>0, top 2) - identifying and analyzing most important components for cluster in diff_comp[diff_comp['Delta'] > 0].sort_values(by='Delta', ascending=False)[:2].index.values: # selecting the two components with the highest negative weight neg_comp = pd.Series(model_k.cluster_centers_[cluster,:]).sort_values()[:2].index.values # selecting the two components with the highest positive weight pos_comp = pd.Series(model_k.cluster_centers_[cluster,:]).sort_values()[-2:].index.values print('\n','CLUSTER ', cluster) print('Negative Components: ', neg_comp) display(most_relevant_feats(neg_comp[0], 5).style.background_gradient(cmap='Reds',subset=[neg_comp[0]])) display(most_relevant_feats(neg_comp[1], 5).style.background_gradient(cmap='Reds',subset=[neg_comp[1]])) print('Positive Components: ', pos_comp) display(most_relevant_feats(pos_comp[0], 5).style.background_gradient(cmap='Reds',subset=[pos_comp[0]])) display(most_relevant_feats(pos_comp[1], 5).style.background_gradient(cmap='Reds',subset=[pos_comp[1]])) print('-------------------') # Non-Customer Clusters (diff<0, top 2) - identifying and analyzing most important components for cluster in diff_comp[diff_comp['Delta'] < 0].sort_values(by='Delta', ascending=False)[-2:].index.values: # selecting the two components with the highest negative weight neg_comp = pd.Series(model_k.cluster_centers_[cluster,:]).sort_values()[:2].index.values # selecting the two components with the highest positive weight pos_comp = pd.Series(model_k.cluster_centers_[cluster,:]).sort_values()[-2:].index.values print('\n','CLUSTER ', cluster) print('Negative Components: ', neg_comp) display(most_relevant_feats(neg_comp[0], 5).style.background_gradient(cmap='Blues',subset=[neg_comp[0]])) display(most_relevant_feats(neg_comp[1], 5).style.background_gradient(cmap='Blues',subset=[neg_comp[1]])) print('Positive Components: ', pos_comp) display(most_relevant_feats(pos_comp[0], 5).style.background_gradient(cmap='Blues',subset=[pos_comp[0]])) display(most_relevant_feats(pos_comp[1], 5).style.background_gradient(cmap='Blues',subset=[pos_comp[1]])) print('-------------------') # Using Seaborn's distplot to plot the proportion of individuals in each cluster fig, ax = plt.subplots(figsize=(15,10)) ax.set_title('General Population vs Customers Distribution') ax.set_xticks([0,1,2,3,4,5,6,7,8,9]) ax.set_xlabel('Cluster') ax.set_ylabel('Proportion of individuals') sns.distplot(prediction_clusters_azdias, hist=False, label='General Population',ax=ax) sns.distplot(prediction_clusters_cust , hist=False, label='Customers', ax=ax) ax.legend(); ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that we've found which parts of the population are more likely to be customers of the mail-order 
company, it's time to build a prediction model. I've picked the XGBoost algorithm to predict which customers are most likely to respond to mail-order campaigns. The results are obviously skewed because of the dataset's imbalance. I have used the [SMOTE](https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/) oversampling technique to offset the imbalance effect, to little avail. ###Code # load train data - change to ";" mailout_train = pd.read_csv('Udacity_MAILOUT_052018_TRAIN.csv', sep=',') # Exploring the dataset mailout_train.head() # split train data into X and y X = mailout_train.copy().drop(columns=['RESPONSE']) y = mailout_train.copy()['RESPONSE'] # Response values - dataset is heavily imbalanced! y.value_counts() # cleaning dataset, but preserving all its rows pd.options.mode.chained_assignment = None X_clean = clean_dataset(X, model=True) # scaling, inputing & creating PCA matrix X_clean_pca = scale_imput_pca(X_clean, comp_80) # splitting into train and validation X_train, X_valid, y_train, y_valid = train_test_split(X_clean_pca, y, test_size=0.3, random_state=42) # balancing input sm = SMOTE(sampling_strategy = 1.0, random_state=42) X_balanced_train, y_balanced_train = sm.fit_resample(X_train, y_train) # checking re-balanced training set y_balanced_train.value_counts() ###Output _____no_output_____ ###Markdown XGBClassifier AlgorithmIn order to identify potential customers (i.e. individuals that will potentially answer to a marketing campaign), I chose XGBoost's classifier algorithm, XGBClassifier. As [this post](https://towardsdatascience.com/a-beginners-guide-to-xgboost-87f5d4c30ed7) highlights, XGBoost offers a high-performance implementation of gradient boosted trees, training models in succession, with each new model beign trained to correct the errors made by the previous ones. In order to tune the algorithm's hyperparameters, I have resorted to GridSearchCV, as well as fit_params with f1 scoring for [early stopping](https://www.kaggle.com/c/sberbank-russian-housing-market/discussion/32739). The f1_eval wrapper was found [here](https://stackoverflow.com/questions/51587535/custom-evaluation-function-based-on-f1-for-use-in-xgboost-python-api). ###Code # first tryout with default parameters # instanciate model xgb_clas = XGBClassifier() # Fit the model xgb_clas.fit(X_balanced_train, y_balanced_train) # Predict with test set y_pred = xgb_clas.predict(X_valid) # Score the model score_test = roc_auc_score(y_valid,y_pred) print(score_test) # Accuracy print('Accuracy Score: ', accuracy_score(y_valid, y_pred).round(3)) # Plot confusion matrices titles_options = [("Confusion matrix, without normalization", None), ("Normalized confusion matrix", 'true')] for title, normalize in titles_options: disp = plot_confusion_matrix(xgb_clas, X_valid, y_valid, display_labels=[0,1], cmap=plt.cm.Blues, normalize=normalize) disp.ax_.set_title(title) plt.grid(False) ###Output _____no_output_____ ###Markdown Tuning Hyperparameters - Approach 1The default parameters generated a bad classifier, roughly similar to random guessing - it gets the response right in only 51% of all cases. Therefore, we need to tune the algorithm's hyperparameters to increase the model's performance and optimize its roc_auc_score. This [article](https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/) was used to guide the tuning. 
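The `f1_eval` wrapper mentioned in the previous section does not appear in the notebook itself. Before the staged tuning below, here is a minimal sketch of such a wrapper; it is an assumption about its shape, written for the older xgboost convention where a custom eval function returns a metric name and a value to minimise, and the usage shown in the comment is illustrative rather than the exact call used in the project.
###Code
# Assumed sketch of an f1-based eval function for xgboost early stopping
# (the exact wrapper used in the project is not shown in this notebook)
from sklearn.metrics import f1_score
import numpy as np

def f1_eval(y_pred, dtrain):
    y_true = dtrain.get_label()
    # return an error (1 - F1) so that early stopping minimises it
    return 'f1_err', 1 - f1_score(y_true, np.round(y_pred))

# Possible usage with the sklearn wrapper (illustrative parameter values):
# xgb_clas.fit(X_balanced_train, y_balanced_train,
#              eval_set=[(X_valid, y_valid)],
#              eval_metric=f1_eval,
#              early_stopping_rounds=10, verbose=False)
###Output
_____no_output_____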
###Code # Defining a function to train models and perform cross validation def modelfit(alg, X, y, useTrainCV=True, cv_folds=5, early_stopping_rounds=50): if useTrainCV: xgb_param = alg.get_xgb_params() xgtrain = xgboost.DMatrix(X, y) cvresult = xgboost.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds, metrics='auc', early_stopping_rounds=early_stopping_rounds) alg.set_params(n_estimators=cvresult.shape[0]) #Fit the algorithm on the data alg.fit(X, y,eval_metric='auc') #Predict training set: dtrain_predictions = alg.predict(X) dtrain_predprob = alg.predict_proba(X)[:,1] #Print model report: print ("\nModel Report") print ("Accuracy : %.4g", accuracy_score(y, dtrain_predictions)) print ("AUC Score (Train): %f", roc_auc_score(y, dtrain_predprob)) # 1- finding optimal number of trees with slightly high learning rate xgb1 = XGBClassifier( learning_rate =0.1, n_estimators=1000, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb1, X_clean_pca, y) xgb1.n_estimators # 2- tune max_depth and min_child_weight param_test1 = { 'max_depth':[3, 6], 'min_child_weight':[3, 5] } gsearch1 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=xgb1.n_estimators, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27), param_grid = param_test1, scoring='roc_auc',n_jobs=4, cv=5) gsearch1.fit(X_clean_pca, y) gsearch1.cv_results_, gsearch1.best_params_, gsearch1.best_score_ gsearch1.cv_results_, gsearch1.best_params_, gsearch1.best_score_ # 3 - tune gamma param_test2 = { 'gamma':[0, 0.1, 0.2] } gsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=xgb1.n_estimators, max_depth=5, min_child_weight=5, gamma=0, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test2, scoring='roc_auc',n_jobs=4, cv=5) gsearch2.fit(X_clean_pca, y) gsearch2.cv_results_, gsearch2.best_params_, gsearch2.best_score_ # 4 - tune subsample, colsample_bytree param_test3 = { 'subsample':[0.5, 0.9], 'colsample_bytree':[0.5, 0.9] } gsearch3 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=xgb1.n_estimators, max_depth=5, min_child_weight=5, gamma=0.1, subsample=0.8, colsample_bytree=0.8, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test3, scoring='roc_auc',n_jobs=4, cv=5) gsearch3.fit(X_clean_pca, y) gsearch3.cv_results_, gsearch3.best_params_, gsearch3.best_score_ # 5 - tune regularization parameters param_test4 = { 'reg_alpha':[0.01, 0.1, 1] } gsearch4 = GridSearchCV(estimator = XGBClassifier(learning_rate =0.1, n_estimators=xgb1.n_estimators, max_depth=5, min_child_weight=5, gamma=0.1, subsample=0.5, colsample_bytree=0.5, objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test4, scoring='roc_auc',n_jobs=4, cv=5) gsearch4.fit(X_clean_pca, y) gsearch4.cv_results_, gsearch4.best_params_, gsearch4.best_score_ # 6 - reduce learning rate xgb4 = XGBClassifier( learning_rate =0.01, n_estimators=1000, max_depth=5, min_child_weight=5, gamma=0.1, subsample=0.8, colsample_bytree=0.8, reg_alpha = 0.1, objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27) modelfit(xgb4, X_clean_pca, y) ###Output Model Report Accuracy : %.4g 0.9876169638284996 AUC Score (Train): %f 
0.7046345905418743 ###Markdown Tuning Hyperparameters - Approach 2As suggested in this [blog post](https://machinelearningmastery.com/xgboost-for-imbalanced-classification/), I applied cross validation score and the scale_pos_weight parameter to counteract the dataset's imbalance. ###Code # optimal scale_pos_weight scale_pos_opt = int(y.value_counts()[0]/y.value_counts()[1]) # define model model_scale = XGBClassifier(scale_pos_weight=scale_pos_opt) # define evaluation procedure cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) # evaluate model scores = cross_val_score(model_scale, X_clean_pca, y, scoring='roc_auc', cv=cv, n_jobs=-1) # summarize performance print('Mean ROC AUC: %.5f' % mean(scores)) ###Output Mean ROC AUC: 0.53571 ###Markdown Combining both Approaches ###Code xgb_final = XGBClassifier( learning_rate =0.01, n_estimators=1000, max_depth=5, min_child_weight=5, gamma=0.1, subsample=0.5, colsample_bytree=0.5, reg_alpha = 0.1, objective= 'binary:logistic', nthread=4, scale_pos_weight=scale_pos_opt, seed=27) modelfit(xgb_final, X_clean_pca, y) # final roc auc score y_pred_final = xgb_final.predict(X_valid) print(roc_auc_score(y_valid, y_pred_final)) # Storing the model as a pickle file pickle.dump(xgb_final, open('./xgb_model.pkl', 'wb')) model = pickle.load(open("./xgb_model.pkl", 'rb')) ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
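Since AUC scores the ranking of predictions rather than hard class labels, the submission can also use the positive-class probability from `predict_proba` as the RESPONSE score. A minimal sketch, assuming the fitted `model` loaded above and the cleaned test features (`mailout_clean_pca`, `mailout_test`) produced in the next cell: ###Code
# sketch only: use probabilities instead of 0/1 labels for the RESPONSE column
response_scores = model.predict_proba(mailout_clean_pca)[:, 1]

kaggle_sub_proba = pd.DataFrame({
    'LNR': mailout_test['LNR'],
    'RESPONSE': response_scores,  # higher score = more likely to respond
})
kaggle_sub_proba.to_csv('Kaggle_Submission_proba.csv', index=False)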
###Code # load test data - adapt path as needed # please change to ";" for the original dataset - I saved with "," locally mailout_test = pd.read_csv('Udacity_MAILOUT_052018_TEST.csv', sep=',') # preparing the dataset mailout_clean = clean_dataset(mailout_test, model=True) mailout_clean_pca = scale_imput_pca(mailout_clean, comp_80) # predicting with combined approach response = model.predict(mailout_clean_pca) # creating submission file kaggle_sub = pd.DataFrame() kaggle_sub['LNR'] = mailout_test['LNR'] kaggle_sub['RESPONSE'] = response # saving to CSV kaggle_sub.to_csv('Kaggle_Submission.csv', index=False) # test with approach 1 response_1 = xgb4.predict(mailout_clean_pca) kaggle_sub_1 = pd.DataFrame() kaggle_sub_1['LNR'] = mailout_test['LNR'] kaggle_sub_1['RESPONSE'] = response_1 kaggle_sub_1.to_csv('Kaggle_Submission_1.csv', index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. 
Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. 
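As a rough illustration of such a reusable pre-processing function (a skeleton only; the actual cleaning rules for this dataset are worked out in the cells that follow): ###Code
def preprocess(df, missing_threshold=0.2):
    # skeleton of a reusable cleaning step: drop very sparse columns and
    # coerce object columns to numeric where possible (illustrative only)
    df = df.copy()
    sparse_cols = df.columns[df.isnull().mean() > missing_threshold]
    df = df.drop(columns=sparse_cols)
    for col in df.select_dtypes('object').columns:
        df[col] = pd.to_numeric(df[col], errors='ignore')
    return df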
TODO Explore data ###Code # load in the data azdias = pd.read_csv('../data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../data/Udacity_CUSTOMERS_052018.csv', sep=';') print(azdias.shape) azdias.head() ###Output _____no_output_____ ###Markdown Cleaning Data dictionaryCheck possible values, their encooding, missmatched columns ###Code # Read up data dictionary df_dictionary = pd.read_excel('../data/DIAS Attributes - Values 2017.xlsx', skiprows=1, usecols=[1,2,3,4]) df_dictionary['Attribute'].fillna(method='ffill', inplace=True) df_dictionary['Description'].fillna(method='ffill', inplace=True) df_dictionary.loc[df_dictionary['Value']=='…', 'Type'] = 'continous' df_dictionary.loc[~(df_dictionary['Value']=='…'), 'Type'] = 'non-continous' # Split the cells with list values '-1, 0' -> ['-1', '0'] df_splitvals = df_dictionary['Value'].str.split(',') split_rows = ~df_splitvals.isnull() df_dictionary.loc[split_rows, 'Value'] = df_splitvals.loc[split_rows] # New row for cells with list items df_dictionary = df_dictionary.explode('Value') # Convert string numbers to numerical all_values = df_dictionary['Value'] df_dictionary['Value'] = pd.to_numeric(df_dictionary['Value'], errors='coerce', downcast='integer') df_dictionary['Value'] = df_dictionary['Value'].fillna(all_values) # Create list of possible values and missing values by column data_type = df_dictionary[['Attribute', 'Type']].drop_duplicates() # Create columns with possible values possible_values = df_dictionary.groupby(by='Attribute')['Value'].apply(list) print(len(possible_values), len(data_type)) data_type = data_type.merge(possible_values, left_on='Attribute', right_index=True).reset_index(drop=True) print(len(data_type)) # missing values df_missing = df_dictionary[df_dictionary['Meaning'].astype('str').str.contains('unknown|no classification possible|no transactions known')] missing_values = df_missing.groupby(by='Attribute')['Value'].apply(list) data_type = data_type.merge(missing_values, left_on='Attribute', right_index=True, how='left').reset_index(drop=True) print(len(data_type)) data_type.rename(columns={'Value_x':'Values','Value_y':'Missing values'}, inplace=True) data_type.head() # Tidy up mismatched column names cols_customers = customers.columns cols_azdias = azdias.columns cols_datadict = sorted(data_type['Attribute']) print(len(cols_customers)) print(len(cols_azdias)) print(len(cols_datadict)) df_cols = pd.Series(sorted(cols_customers), name='customers').to_frame() df_cols = df_cols.merge(pd.Series(sorted(cols_azdias), name='azdias').to_frame(), left_on='customers', right_on='azdias', how='outer') # Deal with mis-spelled column names # in all three sets: rename the _RZ cols_datadict = [col[0:-3] if col[-3:]=='_RZ' else col for col in cols_datadict ] df_cols.loc[df_cols['customers']=='D19_BUCH_CD'] = 'D19_BUCH' # in data dictionary: KBA13_CCM_1400_2500 -> KBA13_CCM_1401_2500 df_dict_cols = pd.Series(sorted(cols_datadict), name='dictionary').to_frame() df_dict_cols.loc[df_dict_cols['dictionary']=='KBA13_CCM_1400_2500']='KBA13_CCM_1401_2500' # in data dictionary: CAMEO_DEUINTL_2015 -> CAMEO_INTL_2015 df_dict_cols.loc[df_dict_cols['dictionary']=='CAMEO_DEUINTL_2015']='CAMEO_INTL_2015' # in data dictionary: SOHO_FLAG -> SOHO_KZ df_dict_cols.loc[df_dict_cols['dictionary']=='SOHO_FLAG']='SOHO_KZ' df_cols = df_cols.merge(df_dict_cols, left_on='customers', right_on='dictionary', how='outer') df_cols['All Cols']=df_cols.fillna(method='pad', axis=1)['dictionary'] len(df_cols) # rename columns in data dictionary 
data_type['Attribute'] = data_type['Attribute'].str.replace('_RZ', '', regex=True) col_name_changes = {'KBA13_CCM_1400_2500':'KBA13_CCM_1401_2500', 'CAMEO_DEUINTL_2015':'CAMEO_INTL_2015', 'SOHO_FLAG': 'SOHO_KZ'} data_type['Attribute'] = data_type['Attribute'].replace(col_name_changes) # rename column in customer and azdias azdias.columns = azdias.columns.str.replace('D19_BUCH_CD', 'D19_BUCH') customers.columns = customers.columns.str.replace('D19_BUCH_CD', 'D19_BUCH') # Checks cols_to_drop = df_cols[df_cols.isnull().any(axis=1)]['All Cols'].values print(len(cols_to_drop)) cols_to_drop[:20] # Check if df is updated df_cols[df_cols['All Cols'].str.contains('CAMEO|KBA13_CCM_140')] ###Output _____no_output_____ ###Markdown Keep columns present in both dataset at the same time. ###Code # Drop all non matching columns cols_to_drop_azdias = [col for col in cols_to_drop if col in azdias.columns ] print(len(cols_to_drop_azdias)) azdias_coldropped = azdias.drop(columns=cols_to_drop_azdias) print('az: {}'.format(len(azdias_coldropped.columns))) cols_to_drop_customers = [col for col in cols_to_drop if col in customers.columns ] print(len(cols_to_drop_customers)) customers_coldropped = customers.drop(columns=cols_to_drop_customers) print('cus: {}'.format(len(customers_coldropped.columns))) ###Output _____no_output_____ ###Markdown replace missing/unknown with nan ###Code # dictionary of missing/unknown value encoding replace_missing = data_type[~data_type['Missing values'].isnull()].set_index('Attribute')['Missing values'].to_dict() # replace missing values customers_clean_na = customers_coldropped.copy() customers_clean_na[customers_clean_na.isin(replace_missing)] = np.nan azdias_clean_na = azdias_coldropped.copy() azdias_clean_na[azdias_clean_na.isin(replace_missing)] = np.nan def clean_df_values(df_in): df = df_in.copy() # turn object columns to numerical object_cols = df.select_dtypes('object').columns object_cols = object_cols[~object_cols.isin(['CAMEO_DEU_2015', 'OST_WEST_KZ'])] if len(object_cols)>0: df[object_cols] = df[object_cols].apply(pd.to_numeric, errors='coerce') # Loop through columns, check if values are in the right range and right data type for i, col in enumerate(df.columns): # continous columns if data_type.loc[data_type['Attribute']==col, 'Type'].iloc[0] == 'continous': if pd.api.types.is_numeric_dtype(df[col]): next else: print(col, df.dtypes[col]) # String encoded & int categorical columns else: in_possible_vals = df[col].isin(data_type.loc[data_type['Attribute']==col, 'Values'].iloc[0]) is_na = df[col].isna() df.loc[~in_possible_vals & ~is_na, col] = pd.NA return df azdias_clean = clean_df_values(azdias_clean_na) customers_clean = clean_df_values(customers_clean_na) # azdias_clean.to_csv('../data/clean_azdias.csv') # customers_clean.to_csv('../data/clean_customers.csv') ###Output _____no_output_____ ###Markdown Compare to population Similar to Kolmogorov-Smirnov Test1. loop through each column 1. calculate relative proportions of each category - for both population and costumers 2. Calculate difference for each 3. Sum the absolute value 2. 
Order columns by the biggest difference ###Code customers_clean = pd.read_csv('../data/clean_customers.csv', index_col=0) customers_clean.head() cols_null = customers_clean.isnull().sum()/ len(customers_clean) cols_null.sort_values() customers_clean[~customers_clean['TITEL_KZ'].isnull()]['TITEL_KZ'].unique() sns.histplot(data=(customers_clean.isnull()).sum() / len(customers_clean)) plt.xticks(ticks=np.arange(0,1.05,0.1)); cols_null[cols_null>0.3].sort_values() # Drop rows with over 20% missing values customers_clean_dro = customers_clean[(customers_clean.isnull().sum(1) / len(customers_clean.columns) <= 0.2)].copy() sns.histplot(data=(customers_clean_dro.isnull()).sum(1) / len(customers_clean.columns)); # Drop columns with over 20% missing values customers_clean_dro = customers_clean_dro[customers_clean.columns[(customers_clean.isnull().sum(0) / len(customers_clean) <= 0.2)]].copy() sns.histplot(data=(customers_clean_dro.isnull()).sum(0) / len(customers_clean)); ###Output _____no_output_____ ###Markdown Imputation ###Code from sklearn.impute import SimpleImputer simple_imp = SimpleImputer(strategy='most_frequent') from sklearn.experimental import enable_iterative_imputer from sklearn.impute import IterativeImputer from sklearn.impute import KNNImputer customers_imputed = simple_imp.fit_transform(customers_clean_dro) pd.DataFrame(customers_imputed).isnull().sum().sum() ###Output _____no_output_____ ###Markdown __kNN Imputer__ ###Code knn_imputer = KNNImputer(n_neighbors=3, weights="uniform") customers_knn_imputed = knn_imputer.fit_transform(customers_clean_dro) ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. 
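The population-vs-customers comparison outlined above (per column, sum the absolute differences between category proportions, then rank the columns) is not written out as code in this notebook; a minimal sketch, assuming `azdias_clean` and `customers_clean` from the cells above share the same columns: ###Code
def proportion_gap(pop_df, cust_df):
    # per column: relative frequency of each category in population vs customers,
    # absolute differences summed over categories (larger = bigger shift)
    gaps = {}
    for col in pop_df.columns:
        pop_prop = pop_df[col].value_counts(normalize=True)
        cust_prop = cust_df[col].value_counts(normalize=True)
        gaps[col] = pop_prop.subtract(cust_prop, fill_value=0).abs().sum()
    return pd.Series(gaps).sort_values(ascending=False)

# columns where customers differ most from the general population (sketch, not executed here)
# proportion_gap(azdias_clean, customers_clean).head(10)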
###Code from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans pca = PCA() pca = pca.fit(X=StandardScaler().fit_transform(customers_imputed)) cus_pca = pca.transform(X=StandardScaler().fit_transform(customers_imputed)) print(np.sum(pca.explained_variance_ratio_)) plt.bar(x=range(1,len(customers_clean_dro.columns)+1) , height=pca.explained_variance_ratio_) plt.bar(x=range(1,len(customers_clean_dro.columns)+1) , height=np.cumsum(pca.explained_variance_ratio_)) inertia = [] for n_clus in range(2,41): kmeans = KMeans(n_clusters=n_clus) kmeans = kmeans.fit(cus_pca) inertia.append(kmeans.inertia_) plt.plot(inertia) inertia = [] for n_clus in range(2,41): kmeans = KMeans(n_clusters=n_clus) kmeans = kmeans.fit(cus_pca[:,:50]) inertia.append(kmeans.inertia_) plt.plot(inertia) ###Output _____no_output_____ ###Markdown __KNN Imputed KMeans__ ###Code pca = PCA() pca = pca.fit(X=StandardScaler().fit_transform(customers_knn_imputed)) cus_pca = pca.transform(X=StandardScaler().fit_transform(customers_knn_imputed)) print(np.sum(pca.explained_variance_ratio_)) plt.bar(x=range(1,len(customers_clean_dro.columns)+1) , height=pca.explained_variance_ratio_) plt.bar(x=range(1,len(customers_clean_dro.columns)+1) , height=np.cumsum(pca.explained_variance_ratio_)) inertia = [] for n_clus in range(2,41): kmeans = KMeans(n_clusters=n_clus) kmeans = kmeans.fit(cus_pca) inertia.append(kmeans.inertia_) plt.plot(inertia) ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. 
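Once the TRAIN partition has been cleaned with the same pre-processing as the other frames, a stratified cross-validated ROC-AUC gives a quick baseline check despite the class imbalance. A minimal sketch (the helper and its inputs are hypothetical; the cleaning step is not shown): ###Code
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

def baseline_auc(X, y, seed=42):
    # stratified ROC-AUC for a simple baseline; X must already be numeric and imputed
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    clf = LogisticRegression(max_iter=1000, class_weight='balanced')
    return cross_val_score(clf, X, y, scoring='roc_auc', cv=cv).mean()

# e.g. baseline_auc(cleaned_mailout_features, mailout_train['RESPONSE'])  # hypothetical inputs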
Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd #pd.set_option("display.max_columns", None) #pd.set_option("display.max_rows", None) pd.options.mode.chained_assignment = None import matplotlib.pyplot as plt import seaborn as sns from sklearn.preprocessing import Imputer from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler, OneHotEncoder from sklearn.decomposition import PCA from sklearn.cluster import KMeans from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.metrics import confusion_matrix,precision_recall_fscore_support from sklearn.utils.multiclass import unique_labels from sklearn.linear_model import LinearRegression #from sklearn import cross_validation from sklearn.model_selection import train_test_split from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC from sklearn.datasets import load_digits from sklearn.model_selection import learning_curve from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, GradientBoostingRegressor from sklearn.pipeline import Pipeline from sklearn.metrics import roc_curve from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import ShuffleSplit from sklearn.datasets import load_digits from sklearn.metrics import roc_auc_score import xgboost as xgb import warnings; warnings.simplefilter('ignore') # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. 
For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';', low_memory=False) customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';', low_memory=False) azdias.head() # Describe the azdias dataset azdias.describe() # Display count of rows and columns azdias.shape # Describe the customers dataset customers.describe() # Display azdias first rows customers.head() # Display count of rows and columns customers.shape def plot_null_ratio(df, name): """ plotting the NaN ratio & proportion of a data set of columns Input: df = dataset to be analyzed name = name of the dataset Output: Plot of the NaN ratio & proportion """ plt.subplots_adjust(hspace=4.0, top = 0.9) df_isnull = df.isnull().sum() df_isnull_perc = df_isnull / len(df) * 100 fig = plt.figure() ax = fig.add_subplot(211) ax = df_isnull_perc.sort_values(ascending=False).head(100).plot(kind='bar', figsize=(15,15), title='Columns with highest Null ratio in {}'.format(name)) ax.set_xlabel("Attribute azdias") ax.set_ylabel("Ratio of Null [%]") ax = fig.add_subplot(212) plt.hist(df_isnull_perc, bins=20); plt.xlabel('Proportion of NaN Values in the Feature/Column') plt.ylabel('Number of Features/Columns in {}'.format(name)) plt.title('Proportion of NaN Values in {} Columns'.format(name)) plt.subplots_adjust(hspace=1.0) plot_null_ratio(azdias, 'azdias') plot_null_ratio(customers, 'customers') def col_to_drop(df, limit_perc): """ Output of the columns to be deleted Input: df = dataset to be analyzed limit_perc = limit in percent Output: columns to drop """ df_null = df.isnull().sum() df_null_perc = df_null / len(df) * 100 df_null_perc = df_null_perc[df_null_perc > limit_perc].index return df_null_perc col_to_drop(azdias, 20) # 
display of columns which are not present in azdias (set(customers.columns.values) - set(azdias.columns.values)) # upload and simple adjustments of the dataset 'DIAS Information Levels - Attributes 2017.xlsx' attribuets_desc = pd.read_excel('DIAS Information Levels - Attributes 2017.xlsx', skiprows=[0]) attribuets_desc.drop('Unnamed: 0', axis=1, inplace=True) attribuets_desc.head() # upload and simple adjustments of the dataset 'DIAS Attributes - Values 2017.xlsx' attributes_values = pd.read_excel('DIAS Attributes - Values 2017.xlsx', skiprows=[0]) attributes_values.drop(columns=['Unnamed: 0'], inplace=True) attributes_values['Attribute'] = attributes_values['Attribute'].fillna(method='ffill') attributes_values['Description'] = attributes_values['Description'].fillna(method='ffill') attributes_values.head(10) # output of the overlapping attributes customers_unique = set(list(customers.columns)) azdias_unique = set(list(azdias.columns)) attribuets_desc_unique = set(list(attribuets_desc.Attribute)) attributes_values_unique = set(list(attributes_values.Attribute)) common_attributes = customers_unique & azdias_unique & attribuets_desc_unique & attributes_values_unique common_attributes # display of the overlapping attributes columns_not_described = list(set(azdias) - set(common_attributes)) columns_not_described # representation of the dtypes in azdias azdias.select_dtypes(include=['object']) # number of unique attributes 'D19_LETZTER_KAUF_BRANCHE' len(azdias['D19_LETZTER_KAUF_BRANCHE'].unique()) # number of unique attributes 'EINGEFUEGT_AM' len(azdias['EINGEFUEGT_AM'].unique()) # number of unique attributes 'OST_WEST_KZ' len(azdias['OST_WEST_KZ'].unique()) # customizing the attributes values dataset with the meaning unknown and creating the array for the values attributes_values_copy =attributes_values.copy() attributes_values_copy['Meaning'].fillna('unknown', inplace=True) attributes_values_unknown = attributes_values_copy[attributes_values_copy['Meaning'].str.contains("unknown")] attributes_values_unknown['Value'] = attributes_values_unknown.Value.apply(lambda x: str(x).split(',')) attributes_values_unknown ###Output _____no_output_____ ###Markdown Data Cleaning ###Code def data_cleaning(df, df_name=None): """ summary of findings on data cleansing Input: df = dataset to be cleaned df_name = name of dataset Output: Prepared dataset """ #Dropping columns with more than twenty percent nulls in azdias drop_cols = ['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'ALTERSKATEGORIE_FEIN', 'D19_BANKEN_ONLINE_QUOTE_12', 'D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP', 'D19_LETZTER_KAUF_BRANCHE', 'D19_LOTTO', 'D19_SOZIALES', 'D19_TELKO_ONLINE_QUOTE_12', 'D19_VERSAND_ONLINE_QUOTE_12', 'D19_VERSI_ONLINE_QUOTE_12', 'EXTSEL992', 'KK_KUNDENTYP'] df = df.drop(drop_cols,axis=1) #Filtering unique columns in customers dataset if df_name == 'customers': df.drop(columns=['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=True) #Dropping columns that are not described in DIAS Information Levels & Values.xlsx attribuets_desc = pd.read_excel('DIAS Information Levels - Attributes 2017.xlsx', skiprows=[0]) attribuets_desc.drop('Unnamed: 0', axis=1, inplace=True) attributes_values = pd.read_excel('DIAS Attributes - Values 2017.xlsx', skiprows=[0]) attributes_values.drop('Unnamed: 0', axis=1, inplace=True) attribuets_desc_unique = set(list(attribuets_desc.Attribute)) attributes_values_unique = set(list(attributes_values.Attribute)) common_attributes = attribuets_desc_unique & 
attributes_values_unique columns_not_described = list(set(df) - set(common_attributes)) df.drop(labels=columns_not_described, axis=1, inplace=True) # Replacing unknown Values with Null attributes_values['Meaning'].fillna('unknown', inplace=True) attributes_values_unknown = attributes_values[attributes_values['Meaning'].str.contains("unknown")] attributes_values_unknown['Value'] = attributes_values_unknown.Value.apply(lambda x: str(x).split(',')) for column in df: try: index = attributes_values_unknown.index[attributes_values_unknown['Attribute']==column].tolist()[0] m = df[column].isin(attributes_values_unknown['values'][index]) df[column] = df[column].mask(m, np.nan) except: continue #Dropping rows with more than twenty percent nulls in azdias #null_perc_row = df.isnull().mean(axis=1) #df = df[null_perc_row < 0.2] # Re-encode Features: df[['CAMEO_DEUG_2015']] = df[['CAMEO_DEUG_2015']].replace(['X','XX'],-1) df[['CAMEO_DEUG_2015']] = df[['CAMEO_DEUG_2015']].fillna(-1) df[['CAMEO_DEUG_2015']] = df[['CAMEO_DEUG_2015']].astype(int) df[['OST_WEST_KZ', 'CAMEO_DEU_2015']]=df[['OST_WEST_KZ', 'CAMEO_DEU_2015']].fillna(-1) df['OST_WEST_KZ'] = df['OST_WEST_KZ'].map({'W': 1, 'O': 2}) # Get dummies df = pd.get_dummies(df) df_columns = list(df.columns.values) #Impute for col in df.columns: df[col] = df[col].fillna(df[col].mode()[0]) scale = StandardScaler(copy=False) scaled = scale.fit_transform(df) df = pd.DataFrame(scaled,columns= df_columns) return df # implementation of the data preparation for customers customers_prep = data_cleaning(customers, 'customers') # implementation of the data preparation for azdias azdias_prep = data_cleaning(azdias, 'azdias') ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. 
###Code def sklearn_pca(df, components=None): pca = PCA(components) data_transformed = pca.fit_transform(df) return pca, data_transformed customers_pca_model, pca_transformed = sklearn_pca(customers_prep, 150) def plot_pca(df, Name=None): """ Principal Component Analysis Input: df = dataset to be analyzed Name = name of dataset Output: plot of PCA """ pca = PCA() customers_pca = pca.fit_transform(df) plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.title('Explained Variance - Scree Plot {}'.format(Name)) plt.xlabel('Number of Components') plt.ylabel('Cumulative Explained Variance') plt.grid(b=True) plt.show() # Plotting pca for the datasets customers & azdias plot_pca(customers_prep, 'Customers') plot_pca(azdias_prep, 'Azdias') def reduce_data(df,n=None): """ reduction of data by component Input: df = dataset to be analyzed n = number of components Output: reduced dataset """ pca = PCA(n_components=n).fit(df) reduced_data = pca.transform(df) reduced_data = pd.DataFrame(reduced_data) print(pca.explained_variance_ratio_.sum()) return reduced_data, pca customers_red, pca_customers = reduce_data(customers_prep, 150) azdias_red, pca_azdias = reduce_data(azdias_prep, 150) def elbow_method(df, Name=None): """ Plot of the elbow method Input: df = dataset to be analyzed Name = name of dataset Output: plot of elbow method """ points = np.array([]) K = range(1,15) for k in K: kmeans = KMeans(k) km = kmeans.fit(df) points = np.append(points, np.abs(km.score(df))) plt.plot(K, points, linestyle='-', marker='x', color='blue') plt.xlabel('K') plt.ylabel('SSE Score') plt.title('Elbow Graph - {}'.format(Name)) plt.show() # plotting elbow method for customers & azdias data elbow_method(customers_red, 'Customers') elbow_method(azdias_red, 'Azdias') def kmeans_cluster(df, k=None): """ Plot of the elbow method Input: df = dataset to be analyzed k = number of clusters Output: predictions model according to kmeans """ kmeans_k = KMeans(n_clusters=k) model_k = kmeans_k.fit(df) df = model_k.predict(df) return df, kmeans_k # Implementation of predictions model according to kmeans for customers & azdias dataset customers_predict, kmeans_customers = kmeans_cluster(customers_red, 12) azdias_predict, kmeans_azdias = kmeans_cluster(azdias_red, 12) def cluster_plot(df1, df2, df1_name=None, df2_name=None): """ Plot of the ratio for each cluster for two datasets & display of data Input: df1 = first dataset to be analyzed df2 = scoud dataset to be analyzed df1_name = name of first dataset to be analyzed df2_name = name of scoud dataset to be analyzed Output: Plot & display of the ratio for each cluster """ df1 = pd.Series(df1, name='{}'.format(df1_name)) df1 = df1.value_counts().sort_index() df2 = pd.Series(df2, name='{}'.format(df2_name)) df2 = df2.value_counts().sort_index() df = pd.concat([df2, df1], axis=1).reset_index() df.columns = ['cluster','{}'.format(df2_name),'{}'.format(df1_name)] df['Perct of {}'.format(df2_name)] = (df['{}'.format(df2_name)] / (df['{}'.format(df2_name)].sum()) * 100 ).round(2) df['Perct of {}'.format(df1_name)] = (df['{}'.format(df1_name)] / (df['{}'.format(df1_name)].sum()) * 100 ).round(2) fig = plt.figure(figsize=(12,5)) ax = fig.add_subplot(111) ax = df['Perct of {}'.format(df2_name)].plot(x=df['cluster'], width=-0.3, align='edge', color='blue', kind='bar', position=0) ax = df['Perct of {}'.format(df1_name)].plot(kind='bar', color='grey', width = 0.3, align='edge', position=1) ax.set_xlabel('Clusters', fontsize=15) ax.set_ylabel('Ratio [%]', fontsize=15) 
ax.xaxis.set(ticklabels=range(20)) ax.tick_params(axis = 'x', which = 'major', labelsize = 13) ax.margins(x=0.5,y=0.1) plt.legend(('{}'.format(df2_name), '{}'.format(df1_name)), fontsize=15) plt.title(('Percentege of Azdias & Customer in each cluster')) plt.show() return df # plotting percentege of azdias & customer in each cluster cluster_plot(customers_predict, azdias_predict, df1_name='Customers', df2_name='Azdias') def combine_info(df_prep, df_red, kmeans, pca, cluster): #n, k, cluster): """ Plot of the ratio for each cluster for two datasets & display of data Input: df_prep = first dataset to be analyzed df_red = scoud dataset to be analyzed n = name of first dataset to be analyzed k = name of scoud dataset to be analyzed cluster = Output: Plot & display of the ratio for each cluster """ df = dict(zip(df_prep.columns, pca.inverse_transform(kmeans.cluster_centers_[cluster]))) df = pd.DataFrame.from_dict(df, orient='index', columns=['Values']).sort_values(by='Values') df['Description'] = np.nan for index, row in attribuets_desc.iterrows(): index_val = row['Attribute'] df.loc[df.index == index_val, "Description"] = row['Description'] print(df.sort_values(by='Values', ascending=False)) pd.set_option("display.max_columns", None) pd.set_option("display.max_rows", None) combine_info(customers_prep, customers_red, kmeans_customers, pca_customers, 2) ###Output Values \ FINANZ_ANLEGER 1.494176 FINANZ_UNAUFFAELLIGER 1.473982 FINANZ_SPARER 1.456437 SEMIO_REL 1.379579 KBA13_ANZAHL_PKW 1.221558 SEMIO_KRIT 1.135870 D19_GESAMT_DATUM 1.060814 CJT_GESAMTTYP 1.054164 SEMIO_PFLICHT 1.000871 SEMIO_KAEM 0.954174 SEMIO_DOM 0.913484 D19_VERSAND_DATUM 0.913041 SEMIO_FAM 0.902379 D19_GESAMT_ONLINE_DATUM 0.816972 REGIOTYP 0.780586 MOBI_REGIO 0.779875 KBA05_MAXBJ 0.777718 D19_VERSAND_ONLINE_DATUM 0.757963 D19_GESAMT_OFFLINE_DATUM 0.723522 W_KEIT_KIND_HH 0.710652 KBA05_MAXAH 0.706154 D19_VERSAND_OFFLINE_DATUM 0.657303 SEMIO_RAT 0.649122 BALLRAUM 0.638445 SEMIO_MAT 0.601003 KBA13_VORB_3 0.575671 EWDICHTE 0.560792 KBA13_CCM_0_1400 0.507866 KBA13_KW_60 0.494509 KKK 0.493611 KBA13_CCM_1200 0.488977 KBA13_KW_40 0.484955 KBA13_KW_50 0.481137 KBA05_ANTG1 0.458851 KBA13_CCM_1000 0.449531 KBA13_KMH_0_140 0.437574 KBA13_KW_70 0.434717 KBA13_KW_80 0.434201 KBA13_SITZE_5 0.420757 KBA05_VORB2 0.414465 D19_TELKO_DATUM 0.404061 D19_BANKEN_DATUM 0.384242 KBA13_CCM_1800 0.370059 ZABEOTYP 0.369477 KBA13_HALTER_25 0.343879 KBA13_KW_90 0.337899 KBA13_ALTERHALTER_30 0.324162 KBA05_AUTOQUOT 0.314016 ANREDE_KZ 0.301585 D19_BANKEN_ONLINE_DATUM 0.288367 KBA13_KMH_140_210 0.277442 PLZ8_ANTG2 0.269239 KBA13_KW_120 0.268686 KBA05_GBZ 0.264855 WOHNDAUER_2008 0.256603 KBA13_BJ_2000 0.252151 KBA13_MOTOR 0.244230 D19_TELKO_OFFLINE_DATUM 0.237999 KBA13_KMH_180 0.235299 KBA05_HERST5 0.235226 KBA13_KW_110 0.233832 KBA13_SEG_KOMPAKTKLASSE 0.232705 KBA13_FAB_ASIEN 0.228444 KBA13_HERST_ASIEN 0.212578 KBA13_HALTER_20 0.210537 KBA13_HALTER_30 0.210459 KBA13_SEG_KLEINWAGEN 0.208173 KBA13_KW_0_60 0.207254 KBA13_SEG_WOHNMOBILE 0.198945 KBA05_MOD1 0.193371 KBA13_BJ_1999 0.190148 FINANZ_HAUSBAUER 0.188703 KBA13_BJ_2009 0.177934 KBA13_CCM_2500 0.177283 KBA13_FAB_SONSTIGE 0.175994 KBA13_HERST_SONST 0.175994 KBA13_BJ_2008 0.174100 D19_BANKEN_OFFLINE_DATUM 0.172584 KBA05_KW1 0.171586 KBA05_ZUL1 0.168378 KBA13_NISSAN 0.158270 KBA13_SEG_OBERKLASSE 0.155514 KBA13_SEG_KLEINST 0.153720 KBA05_MOD4 0.152673 GEBAEUDETYP_RASTER 0.150619 KBA05_ALTER2 0.149055 KBA05_MOTOR 0.147385 KBA05_HERST4 0.147160 KBA13_HERST_FORD_OPEL 0.147046 KBA13_KMH_250 0.146412 KBA05_CCM1 
0.143920 KBA13_KMH_211 0.142743 KBA13_KRSHERST_FORD_OPEL 0.136205 KBA13_VORB_2 0.135912 KBA13_KW_121 0.135523 KBA13_VORB_1_2 0.130530 KBA13_CCM_2501 0.126275 KBA05_HERST3 0.119301 KBA13_CCM_1400 0.117014 KBA13_FORD 0.114721 KBA13_OPEL 0.110146 KBA13_RENAULT 0.098193 KBA05_MAXVORB 0.097804 INNENSTADT 0.093306 KBA05_SEG3 0.092044 D19_TELKO_ONLINE_DATUM 0.083392 KBA13_HALTER_50 0.082006 KBA13_KRSSEG_KLEIN 0.071705 KBA13_HALTER_35 0.066029 SEMIO_TRADV 0.065164 RELAT_AB 0.063271 KBA13_ALTERHALTER_60 0.062184 KBA05_MODTEMP 0.061458 KBA05_SEG2 0.056769 KBA13_CCM_1600 0.053019 KBA05_CCM2 0.043953 KBA05_KRSHERST3 0.042246 KBA13_MAZDA 0.042119 KBA13_SEG_SONSTIGE 0.035403 KBA13_SEG_SPORTWAGEN 0.027128 KBA13_KRSSEG_VAN 0.025878 KBA05_ZUL3 0.020119 KBA13_SEG_MITTELKLASSE 0.016653 KBA13_HALTER_55 0.016024 KBA13_AUTOQUOTE 0.012462 KBA13_HALTER_60 0.011333 KBA13_KRSAQUOT 0.002995 KBA13_VW -0.000940 KBA13_KRSZUL_NEU -0.001710 KBA13_ALTERHALTER_45 -0.003607 KBA05_KRSKLEIN -0.016748 KBA05_VORB1 -0.018956 KBA13_BJ_2004 -0.021836 KBA13_HALTER_40 -0.024214 KBA05_BAUMAX -0.027831 KBA05_MOD2 -0.035122 KBA05_SEG4 -0.038797 KBA13_KRSSEG_OBER -0.038962 KBA13_HERST_EUROPA -0.040760 KBA13_VORB_1 -0.044775 KBA13_HERST_AUDI_VW -0.047903 ORTSGR_KLS9 -0.048133 KBA13_TOYOTA -0.050975 KBA05_MOD3 -0.061524 KBA05_KRSVAN -0.062562 KBA05_ZUL2 -0.069972 KBA13_SEG_MINIVANS -0.070305 KBA13_SEG_UTILITIES -0.073958 KBA05_FRAU -0.074949 KBA13_HALTER_45 -0.076390 KBA13_KRSHERST_AUDI_VW -0.077298 KBA13_KW_61_120 -0.078707 KONSUMNAEHE -0.082044 TITEL_KZ -0.088978 KBA05_KW2 -0.089199 KBA05_KRSZUL -0.091950 KBA05_KRSHERST2 -0.098949 FINANZTYP -0.099135 KBA13_SEG_MINIWAGEN -0.102925 ANZ_HH_TITEL -0.106395 KBA13_PEUGEOT -0.108959 KBA05_KRSOBER -0.109058 KBA13_BJ_2006 -0.111567 ANZ_TITEL -0.116347 KBA05_SEG10 -0.116636 KBA05_KRSHERST1 -0.116936 KBA05_SEG1 -0.119667 KBA05_ANHANG -0.122094 KBA05_ALTER3 -0.126568 KBA13_SEG_GELAENDEWAGEN -0.131786 KBA05_HERST2 -0.144248 ANZ_PERSONEN -0.170814 KBA05_DIESEL -0.172343 KBA05_SEG9 -0.177216 KBA13_HALTER_66 -0.182740 KBA13_SEG_VAN -0.186586 KBA05_MOTRAD -0.187396 KBA13_AUDI -0.192677 KBA05_MAXSEG -0.206315 KBA13_CCM_2000 -0.211507 KBA05_CCM3 -0.212631 KBA13_FIAT -0.214788 KBA05_ALTER4 -0.215743 KBA05_VORB0 -0.219103 KBA13_ALTERHALTER_61 -0.223802 ANZ_HAUSHALTE_AKTIV -0.231632 KBA05_SEG6 -0.236843 KBA13_KRSHERST_BMW_BENZ -0.237936 OST_WEST_KZ -0.250245 KBA05_ZUL4 -0.255168 SEMIO_LUST -0.256139 KBA05_ANTG4 -0.264332 KBA13_SEG_GROSSRAUMVANS -0.268187 KBA13_SITZE_6 -0.283625 MIN_GEBAEUDEJAHR -0.289009 KBA13_HALTER_65 -0.293609 WOHNLAGE -0.294268 KBA05_KRSAQUOT -0.311305 KBA13_SEG_OBEREMITTELKLASSE -0.314621 KBA05_MOD8 -0.323908 KBA05_CCM4 -0.324237 KBA13_VORB_0 -0.329168 KBA05_ANTG3 -0.332253 PLZ8_ANTG3 -0.360968 KBA13_KMH_110 -0.369128 KBA13_MERCEDES -0.372133 GFK_URLAUBERTYP -0.372786 KBA05_MAXHERST -0.376070 KBA13_KMH_251 -0.381844 PLZ8_BAUMAX -0.389929 KBA05_HERSTTEMP -0.391153 KBA05_ALTER1 -0.399129 KBA13_SITZE_4 -0.400570 KBA13_BMW -0.408205 SEMIO_KULT -0.436144 KBA13_HERST_BMW_BENZ -0.438397 KBA05_HERST1 -0.459315 KBA13_KW_30 -0.459594 GEBAEUDETYP -0.463861 KBA05_KW3 -0.478514 PLZ8_ANTG1 -0.484511 KBA05_SEG5 -0.485139 ONLINE_AFFINITAET -0.494927 PLZ8_GBZ -0.495926 RETOURTYP_BK_S -0.526643 PLZ8_HHZ -0.541027 HH_EINKOMMEN_SCORE -0.618683 KBA05_SEG8 -0.657677 PLZ8_ANTG4 -0.659320 KBA05_SEG7 -0.663704 LP_STATUS_GROB -0.713566 GREEN_AVANTGARDE -0.770770 KBA05_ANTG2 -0.799080 SEMIO_ERL -0.804464 KBA13_CCM_1500 -0.833547 KBA13_KMH_140 -0.874690 LP_LEBENSPHASE_GROB -0.918524 FINANZ_MINIMALIST -0.932023 
LP_FAMILIE_GROB -0.939813 AGER_TYP -0.967536 LP_LEBENSPHASE_FEIN -0.970200 SEMIO_SOZ -1.011994 GEBURTSJAHR -1.024800 PRAEGENDE_JUGENDJAHRE -1.087389 ALTER_HH -1.163910 FINANZ_VORSORGER -1.188903 ALTERSKATEGORIE_GROB -1.239938 CAMEO_DEUG_2015 -1.269191 SHOPPER_TYP -1.297540 SEMIO_VERT -1.322535 HEALTH_TYP -1.508612 NATIONALITAET_KZ -1.550720 VERS_TYP -1.575848 Description FINANZ_ANLEGER financial typology: investor FINANZ_UNAUFFAELLIGER financial typology: unremarkable FINANZ_SPARER financial typology: money saver SEMIO_REL affinity indicating in what way the person is ... KBA13_ANZAHL_PKW number of cars in the PLZ8 SEMIO_KRIT affinity indicating in what way the person is ... D19_GESAMT_DATUM actuality of the last transaction with the com... CJT_GESAMTTYP Customer-Journey-Typology relating to the pref... SEMIO_PFLICHT affinity indicating in what way the person is ... SEMIO_KAEM affinity indicating in what way the person is ... SEMIO_DOM affinity indicating in what way the person is ... D19_VERSAND_DATUM actuality of the last transaction for the segm... SEMIO_FAM affinity indicating in what way the person is ... D19_GESAMT_ONLINE_DATUM actuality of the last transaction with the com... REGIOTYP AZ neighbourhood typology MOBI_REGIO moving patterns KBA05_MAXBJ most common age of the cars in the microcell D19_VERSAND_ONLINE_DATUM actuality of the last transaction for the segm... D19_GESAMT_OFFLINE_DATUM actuality of the last transaction with the com... W_KEIT_KIND_HH likelihood of a child present in this househol... KBA05_MAXAH most common age of car owners in the microcell D19_VERSAND_OFFLINE_DATUM actuality of the last transaction for the segm... SEMIO_RAT affinity indicating in what way the person is ... BALLRAUM distance to the next metropole SEMIO_MAT affinity indicating in what way the person is ... KBA13_VORB_3 share of cars with more than 2 preowner - PLZ8 EWDICHTE density of inhabitants per square kilometer KBA13_CCM_0_1400 share of cars with less than 1401ccm within th... KBA13_KW_60 share of cars with an engine power between 51 ... KKK purchasing power KBA13_CCM_1200 share of cars with less than 1000ccm within th... KBA13_KW_40 share of cars with an engine power between 31 ... KBA13_KW_50 share of cars with an engine power between 41 ... KBA05_ANTG1 number of 1-2 family houses in the cell KBA13_CCM_1000 share of cars with less than 1000ccm within th... KBA13_KMH_0_140 share of cars with max speed 140 km/h within t... KBA13_KW_70 share of cars with an engine power between 61 ... KBA13_KW_80 share of cars with an engine power between 71 ... KBA13_SITZE_5 number of cars with 5 seats in the PLZ8 KBA05_VORB2 share of cars with more than two preowner D19_TELKO_DATUM actuality of the last transaction for the segm... D19_BANKEN_DATUM actuality of the last transaction for the segm... KBA13_CCM_1800 share of cars with 1600ccm to 1799ccm within t... ZABEOTYP typification of energy consumers KBA13_HALTER_25 share of car owners between 21 and 25 within t... KBA13_KW_90 share of cars with an engine power between 81 ... KBA13_ALTERHALTER_30 share of car owners below 31 within the PLZ8 KBA05_AUTOQUOT share of cars per household ANREDE_KZ gender D19_BANKEN_ONLINE_DATUM actuality of the last transaction for the segm... KBA13_KMH_140_210 share of cars with max speed between 140 and 2... PLZ8_ANTG2 number of 3-5 family houses in the PLZ8 KBA13_KW_120 share of cars with an engine power between 111... 
KBA05_GBZ number of buildings in the microcell WOHNDAUER_2008 length of residenca KBA13_BJ_2000 share of cars built between 2000 and 2003 with... KBA13_MOTOR most common motor size within the PLZ8 D19_TELKO_OFFLINE_DATUM actuality of the last transaction for the segm... KBA13_KMH_180 share of cars with max speed between 110 km/h ... KBA05_HERST5 share of asian manufacturer (e.g. Toyota, Kia,... KBA13_KW_110 share of cars with an engine power between 91 ... KBA13_SEG_KOMPAKTKLASSE share of lowe midclass cars (Ford Focus etc.) ... KBA13_FAB_ASIEN share of other Asian Manufacturers within the ... KBA13_HERST_ASIEN share of asian cars within the PLZ8 KBA13_HALTER_20 share of car owners below 21 within the PLZ8 KBA13_HALTER_30 share of car owners between 26 and 30 within t... KBA13_SEG_KLEINWAGEN share of small and very small cars (Ford Fiest... KBA13_KW_0_60 share of cars with less than 61 KW engine powe... KBA13_SEG_WOHNMOBILE share of roadmobiles within the PLZ8 KBA05_MOD1 share of upper class cars (in an AZ specific d... KBA13_BJ_1999 share of cars built between 1995 and 1999 with... FINANZ_HAUSBAUER financial typology: main focus is the own house KBA13_BJ_2009 share of cars built in 2009 within the PLZ8 KBA13_CCM_2500 share of cars with 2000ccm to 2499ccm within t... KBA13_FAB_SONSTIGE share of other Manufacturers within the PLZ8 KBA13_HERST_SONST share of other cars within the PLZ8 KBA13_BJ_2008 share of cars built in 2008 within the PLZ8 D19_BANKEN_OFFLINE_DATUM actuality of the last transaction for the segm... KBA05_KW1 share of cars with less than 59 KW engine power KBA05_ZUL1 share of cars built before 1994 KBA13_NISSAN share of NISSAN within the PLZ8 KBA13_SEG_OBERKLASSE share of upper class cars (BMW 7er etc.) in th... KBA13_SEG_KLEINST share of very small cars (Ford Ka etc.) in the... KBA05_MOD4 share of small cars (in an AZ specific definit... GEBAEUDETYP_RASTER industrial areas KBA05_ALTER2 share of car owners inbetween 31 and 45 years ... KBA05_MOTOR most common engine size in the microcell KBA05_HERST4 share of European manufacturer (e.g. Fiat, Peu... KBA13_HERST_FORD_OPEL share of Ford & Opel/Vauxhall within the PLZ8 KBA13_KMH_250 share of cars with max speed between 210 and 2... KBA05_CCM1 share of cars with less than 1399ccm KBA13_KMH_211 share of cars with a greater max speed than 21... KBA13_KRSHERST_FORD_OPEL share of FORD/Opel (referred to the county ave... KBA13_VORB_2 share of cars with 2 preowner - PLZ8 KBA13_KW_121 share of cars with an engine power of more tha... KBA13_VORB_1_2 share of cars with 1 or 2 preowner - PLZ8 KBA13_CCM_2501 share of cars with more than 2501ccm within th... KBA05_HERST3 share of Ford/Opel KBA13_CCM_1400 share of cars with 1200ccm to 1399ccm within t... KBA13_FORD share of FORD within the PLZ8 KBA13_OPEL share of OPEL within the PLZ8 KBA13_RENAULT share of RENAULT within the PLZ8 KBA05_MAXVORB most common preowner structure in the microcell INNENSTADT distance to the city centre KBA05_SEG3 share of lowe midclass cars (Ford Focus etc.) ... D19_TELKO_ONLINE_DATUM actuality of the last transaction for the segm... KBA13_HALTER_50 share of car owners between 46 and 50 within t... KBA13_KRSSEG_KLEIN share of small cars (referred to the county av... KBA13_HALTER_35 share of car owners between 31 and 35 within t... SEMIO_TRADV affinity indicating in what way the person is ... RELAT_AB share of unemployed in relation to the county ... KBA13_ALTERHALTER_60 share of car owners between 46 and 60 within t... 
KBA05_MODTEMP Development of the most common car segment in ... KBA05_SEG2 share of small and very small cars (Ford Fiest... KBA13_CCM_1600 share of cars with 1500ccm to 1599ccm within t... KBA05_CCM2 share of cars with 1400ccm to 1799 ccm KBA05_KRSHERST3 share of Ford/Opel (reffered to the county ave... KBA13_MAZDA share of MAZDA within the PLZ8 KBA13_SEG_SONSTIGE share of other cars within the PLZ8 KBA13_SEG_SPORTWAGEN share of sportscars within the PLZ8 KBA13_KRSSEG_VAN share of vans (referred to the county average)... KBA05_ZUL3 share of cars built between 2001 and 2002 KBA13_SEG_MITTELKLASSE share of middle class cars (Ford Mondeo etc.) ... KBA13_HALTER_55 share of car owners between 51 and 55 within t... KBA13_AUTOQUOTE share of cars per household within the PLZ8 KBA13_HALTER_60 share of car owners between 56 and 60 within t... KBA13_KRSAQUOT share of cars per household (referred to the c... KBA13_VW share of VOLKSWAGEN within the PLZ8 KBA13_KRSZUL_NEU share of newbuilt cars (referred to the county... KBA13_ALTERHALTER_45 share of car owners between 31 and 45 within t... KBA05_KRSKLEIN share of small cars (referred to the county av... KBA05_VORB1 share of cars with one or two preowner KBA13_BJ_2004 share of cars built before 2004 within the PLZ8 KBA13_HALTER_40 share of car owners between 36 and 40 within t... KBA05_BAUMAX most common building-type within the cell KBA05_MOD2 share of middle class cars (in an AZ specific ... KBA05_SEG4 share of middle class cars (Ford Mondeo etc.) ... KBA13_KRSSEG_OBER share of upper class cars (referred to the cou... KBA13_HERST_EUROPA share of European cars within the PLZ8 KBA13_VORB_1 share of cars with 1 preowner - PLZ8 KBA13_HERST_AUDI_VW share of Volkswagen & Audi within the PLZ8 ORTSGR_KLS9 classified number of inhabitants KBA13_TOYOTA share of TOYOTA within the PLZ8 KBA05_MOD3 share of Golf-class cars (in an AZ specific de... KBA05_KRSVAN share of vans (referred to the county average) KBA05_ZUL2 share of cars built between 1994 and 2000 KBA13_SEG_MINIVANS share of minivans within the PLZ8 KBA13_SEG_UTILITIES share of MUVs/SUVs within the PLZ8 KBA05_FRAU share of female car owners KBA13_HALTER_45 share of car owners between 41 and 45 within t... KBA13_KRSHERST_AUDI_VW share of Volkswagen (referred to the county av... KBA13_KW_61_120 share of cars with an engine power between 61 ... KONSUMNAEHE distance from a building to PoS (Point of Sale) TITEL_KZ flag whether this person holds an academic title KBA05_KW2 share of cars with an engine power between 60 ... KBA05_KRSZUL share of newbuilt cars (referred to the county... KBA05_KRSHERST2 share of Volkswagen (reffered to the county av... FINANZTYP best descirbing financial type for the peron KBA13_SEG_MINIWAGEN share of minicars within the PLZ8 ANZ_HH_TITEL number of holders of an academic title in the ... KBA13_PEUGEOT share of PEUGEOT within the PLZ8 KBA05_KRSOBER share of upper class cars (referred to the cou... KBA13_BJ_2006 share of cars built between 2005 and 2006 with... ANZ_TITEL number of bearers of an academic title within ... KBA05_SEG10 share of more specific cars (Vans, convertable... KBA05_KRSHERST1 share of Mercedes/BMW (reffered to the county ... KBA05_SEG1 share of very small cars (Ford Ka etc.) in the... KBA05_ANHANG share of trailers in the microcell KBA05_ALTER3 share of car owners inbetween 45 and 60 years ... 
KBA13_SEG_GELAENDEWAGEN share of allterrain within the PLZ8 KBA05_HERST2 share of Volkswagen-Cars (including Audi) ANZ_PERSONEN number of persons known in this household KBA05_DIESEL share of cars with Diesel-engine in the microcell KBA05_SEG9 share of vans in the microcell KBA13_HALTER_66 share of car owners over 66 within the PLZ8 KBA13_SEG_VAN share of vans within the PLZ8 KBA05_MOTRAD share of motorcycles per household KBA13_AUDI share of AUDI within the PLZ8 KBA05_MAXSEG most common car segment in the microcell KBA13_CCM_2000 share of cars with 1800ccm to 1999ccm within t... KBA05_CCM3 share of cars with 1800ccm to 2499 ccm KBA13_FIAT share of FIAT within the PLZ8 KBA05_ALTER4 share of cars owners elder than 61 years KBA05_VORB0 share of cars with no preowner KBA13_ALTERHALTER_61 share of car owners elder than 60 within the PLZ8 ANZ_HAUSHALTE_AKTIV number of households known in this building KBA05_SEG6 share of upper class cars (BMW 7er etc.) in th... KBA13_KRSHERST_BMW_BENZ share of BMW/Mercedes Benz (referred to the co... OST_WEST_KZ flag indicating the former GDR/FRG KBA05_ZUL4 share of cars built from 2003 on SEMIO_LUST affinity indicating in what way the person is ... KBA05_ANTG4 number of >10 family houses in the cell KBA13_SEG_GROSSRAUMVANS share of big sized vans within the PLZ8 KBA13_SITZE_6 number of cars with more than 5 seats in the PLZ8 MIN_GEBAEUDEJAHR year the building was first mentioned in our d... KBA13_HALTER_65 share of car owners between 61 and 65 within t... WOHNLAGE neighbourhood-area (very good -> rather poor; ... KBA05_KRSAQUOT share of cars per household (reffered to count... KBA13_SEG_OBEREMITTELKLASSE share of upper middle class cars and upper cla... KBA05_MOD8 share of vans (in an AZ specific definition) KBA05_CCM4 share of cars with more than 2499ccm KBA13_VORB_0 share of cars with no preowner - PLZ8 KBA05_ANTG3 number of 6-10 family houses in the cell PLZ8_ANTG3 number of 6-10 family houses in the PLZ8 KBA13_KMH_110 share of cars with max speed 110 km/h within t... KBA13_MERCEDES share of MERCEDES within the PLZ8 GFK_URLAUBERTYP vacation habits KBA05_MAXHERST most common car manufacturer in the microcell KBA13_KMH_251 share of cars with a greater max speed than 25... PLZ8_BAUMAX most common building-type within the PLZ8 KBA05_HERSTTEMP Development of the most common car manufacture... KBA05_ALTER1 share of car owners less than 31 years old KBA13_SITZE_4 number of cars with less than 5 seats in the PLZ8 KBA13_BMW share of BMW within the PLZ8 SEMIO_KULT affinity indicating in what way the person is ... KBA13_HERST_BMW_BENZ share of BMW & Mercedes Benz within the PLZ8 KBA05_HERST1 share of top German manufacturer (Mercedes, BMW) KBA13_KW_30 share of cars up to 30 KW engine power - PLZ8 GEBAEUDETYP type of building (residential or commercial) KBA05_KW3 share of cars with an engine power of more tha... PLZ8_ANTG1 number of 1-2 family houses in the PLZ8 KBA05_SEG5 share of upper middle class cars and upper cla... ONLINE_AFFINITAET online affinity PLZ8_GBZ number of buildings within the PLZ8 RETOURTYP_BK_S return type PLZ8_HHZ number of households within the PLZ8 HH_EINKOMMEN_SCORE estimated household_net_income KBA05_SEG8 share of roadster and convertables in the micr... PLZ8_ANTG4 number of >10 family houses in the PLZ8 KBA05_SEG7 share of all-terrain vehicles and MUVs in the ... LP_STATUS_GROB social status rough GREEN_AVANTGARDE the environmental sustainability is the domina... 
KBA05_ANTG2 number of 3-5 family houses in the cell SEMIO_ERL affinity indicating in what way the person is ... KBA13_CCM_1500 share of cars with 1400ccm to 1499ccm within t... KBA13_KMH_140 share of cars with max speed between 110 km/h ... LP_LEBENSPHASE_GROB lifestage rough FINANZ_MINIMALIST financial typology: low financial interest LP_FAMILIE_GROB family type rough AGER_TYP best-ager typology LP_LEBENSPHASE_FEIN lifestage fine SEMIO_SOZ affinity indicating in what way the person is ... GEBURTSJAHR year of birth PRAEGENDE_JUGENDJAHRE dominating movement in the person's youth (ava... ALTER_HH main age within the household FINANZ_VORSORGER financial typology: be prepared ALTERSKATEGORIE_GROB age through prename analysis CAMEO_DEUG_2015 CAMEO_4.0: uppergroup SHOPPER_TYP shopping typology SEMIO_VERT affinity indicating in what way the person is ... HEALTH_TYP health typology NATIONALITAET_KZ nationaltity VERS_TYP insurance typology ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code # Loading the train data mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') # Head of train dataframe mailout_train.head() # Count of rows & columns of train dataset mailout_train.shape # Cleaning the data and concat the column 'RESPONSE' mailout_train_clean = pd.concat([data_cleaning(mailout_train.drop(['RESPONSE'], axis=1), 'mailout_train'), mailout_train['RESPONSE']], axis=1) # Building the X and y dataset X = mailout_train_clean.drop(['RESPONSE'], axis=1) y = mailout_train_clean['RESPONSE'] # Count of rows & columns of clean train dataset mailout_train_clean.shape def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)): """ Generate a simple plot of the test and traning learning curve. In accordance with: https://scikit-learn.org/0.15/auto_examples/plot_learning_curve.html Parameters ---------- estimator : object type that implements the "fit" and "predict" methods An object of that type which is cloned for each validation. title : string Title for the chart. X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples) or (n_samples, n_features), optional Target relative to X for classification or regression; None for unsupervised learning. ylim : tuple, shape (ymin, ymax), optional Defines minimum and maximum yvalues plotted. cv : integer, cross-validation generator, optional If an integer is passed, it is the number of folds (defaults to 3). 
Specific cross-validation objects can be passed, see sklearn.cross_validation module for the list of possible objects n_jobs : integer, optional Number of jobs to run in parallel (default 1). """ plt.figure() plt.title(title) if ylim is not None: plt.ylim(*ylim) plt.xlabel("Training examples") plt.ylabel("Score") train_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score") plt.legend(loc="best") print("Train score = {}".format(train_scores_mean[-1].round(3))) print("Validation score = {}".format(test_scores_mean[-1].round(3))) param_grid = {} gridcv = GridSearchCV(estimator=estimator, param_grid=param_grid, scoring='roc_auc') gridcv.fit(X, y) print("GritCv score = {}".format(gridcv.best_score_.round(3))) pass return plt title = "Learning Curves (Naive Bayes)" estimator = GaussianNB() plot_learning_curve(estimator, title, X, y); title = "RandomForestClassifier" estimator = RandomForestClassifier() plot_learning_curve(estimator, title, X, y); title = "GradientBoostingClassifier" estimator = GradientBoostingClassifier() plot_learning_curve(estimator, title, X, y); title = "AdaBoostClassifier" estimator = AdaBoostClassifier() plot_learning_curve(estimator, title, X, y); LogisticRegression(random_state=0) title = "LogisticRegression(random_state=0)" estimator = LogisticRegression(random_state=0) plot_learning_curve(estimator, title, X, y); DecisionTreeClassifier title = "DecisionTreeClassifier" estimator = DecisionTreeClassifier() plot_learning_curve(estimator, title, X, y); title = "XGBClassifier" estimator = xgb.XGBClassifier(eval_metric = 'auc') plot_learning_curve(estimator, title, X, y); xgb.XGBClassifier(eval_metric = 'auc').get_params param_grid = { 'n_estimators': [25, 50, 100], 'colsample_bytree': [0.5, 0.7, 0.8], 'learning_rate': [0.1, 0.2, 0.3], 'max_depth': [5, 10, 15], 'reg_alpha': [1.1, 1.2, 1.3], 'reg_lambda': [1.1, 1.2, 1.3], } gridcv = GridSearchCV(estimator=xgb.XGBClassifier(eval_metric = 'auc'), param_grid=param_grid, scoring='roc_auc') gridcv.fit(X, y) print(gridcv.best_score_) print(gridcv.best_estimator_) # Saving the optimized model clf = xgb.XGBClassifier(eval_metric = 'auc', n_estimators=50, colsample_bytree=0.8, learning_rate=0.3, max_depth=5, reg_alpha=1.3, reg_lambda=1.2 ) clf.fit(X,y) ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. 
The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') # Separation of the column 'LNR' in diffrent dataframe LNR = mailout_test['LNR'] # Cleaning the test_data mailout_test_clean = data_cleaning(mailout_test, 'mailout_test') # Modeling the prediction y_pred = clf.predict_proba(mailout_test_clean) # Making the final dataframe for submission df_sub = pd.concat([pd.DataFrame(LNR), pd.DataFrame(y_pred)], axis=1) df_sub = df_sub[['LNR', 1]] df_sub.rename(columns={1: "RESPONSE"}, inplace=True) # Display the final dataframe df_sub.head() # Transforming the data frame to csv df_sub.to_csv('submission.csv', index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, we will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. We'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, we'll apply what we've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that we will use has been provided by Bertelsmann Arvato Analytics. 
###Code # import neccessary libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # scikit learn from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA from sklearn.cluster import MiniBatchKMeans from sklearn.cluster import KMeans from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.metrics import plot_roc_curve from sklearn.metrics import roc_auc_score from sklearn.feature_selection import SelectKBest, f_classif from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.neural_network import MLPClassifier from sklearn.ensemble import GradientBoostingClassifier # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. We use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then we use our analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed. ###Code # load in the data azdias = pd.read_csv('data/azdias.csv') customers = pd.read_csv('data/customer.csv') ###Output /opt/anaconda3/envs/DataScience/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3051: DtypeWarning: Columns (18,19) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result) ###Markdown Columns 'CAMEO_DEUG_2015' and 'CAMEO_INTL_2015' have mixed datatypes, which we are going to fix. ###Code def mixed_datatypes_handler(dataframes, mixed_datatypes): """ Takes as input a dictionary (mixed_datatypes) and makes specific columns consistent. 
args: - mixed_datatypes: dict with keys=columns and values=attributes - dataframes: pandas Dataframes """ for key, value in mixed_datatypes.items(): for frame in dataframes: frame[key].replace(value, [float(i) for i in value], inplace=True) # Same attributes are sometimes encoded as strings and sometimes as floats, e.g., 1 and '1' mixed_datatypes = {'CAMEO_DEUG_2015': [str(i) for i in range(1,10)], 'CAMEO_INTL_2015': ['22', '24', '41', '12', '54', '51', '44', '35', '23', '25', '14','34', '52', '55', '31', '32', '15', '13', '43', '33', '45']} mixed_datatypes_handler([azdias, customers], mixed_datatypes) ###Output _____no_output_____ ###Markdown General InformationDisplay some summary statistics and information about both datasets, azdias and customers. ###Code azdias.head() customers.head() ###Output _____no_output_____ ###Markdown Next, we get some information about the dtypes and shapes of "azdias" and "customers" ###Code print(azdias.info()) print() print(customers.info()) azdias.describe() customers.describe() ###Output _____no_output_____ ###Markdown First look at featuresFirst, we have a look at the number of features in each dataset. Afterwards, we identify the common features.Since there are only three columns that both datasets do not have in common, we discard them. ###Code common_attributes = set(customers.columns).intersection(set(azdias.columns)) print("Number of attributes for 'azdias': {}".format(len(azdias.columns))) print("Number of attributes for 'customers': {}".format(len(customers.columns))) print("Number of common attributes: {}".format(len(common_attributes))) # we store the uncommon variables as a global variable NOT_COMMON_ATTRIBUTES = list(set(customers.columns).symmetric_difference(set(azdias.columns))) print("Not common attributes: {}".format(NOT_COMMON_ATTRIBUTES)) # Delete columns that datasets do not have in common customers.drop(NOT_COMMON_ATTRIBUTES, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Next, we will have a look at the categorical attributes of azdias and customers, respectively. ###Code print('Cat. attributes azdias:', azdias.select_dtypes(include=['object']).columns.values) print() print('Cat. attributes customers:', customers.select_dtypes(include=['object']).columns.values) ###Output Cat. attributes azdias: ['CAMEO_DEU_2015' 'CAMEO_DEUG_2015' 'CAMEO_INTL_2015' 'D19_LETZTER_KAUF_BRANCHE' 'EINGEFUEGT_AM' 'OST_WEST_KZ'] Cat. attributes customers: ['CAMEO_DEU_2015' 'CAMEO_DEUG_2015' 'CAMEO_INTL_2015' 'D19_LETZTER_KAUF_BRANCHE' 'EINGEFUEGT_AM' 'OST_WEST_KZ'] ###Markdown The column 'EINGEFUEGT_AM' doesn't seem that meaningful. Therefore, we discard that attribute from both datasets. ###Code azdias.drop(['EINGEFUEGT_AM'], axis=1, inplace=True) customers.drop(['EINGEFUEGT_AM'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Some of the categorical attributes contain values that do not have an encoded meaning. ###Code def attributes_to_replace(dataframes, attr_to_replace): """ Replace specific values in specific column by Nan values. 
args: - attr_to_replace: dict with key=columns and values=attributes - dataframes: pandas Dataframes """ for key, value in attr_to_replace.items(): for frame in dataframes: frame[key].replace(value, np.nan, inplace=True) # Some of the columns have attributes that do not occur in DIAS-Attributes col_attr_to_replace = {'CAMEO_DEU_2015': 'XX', 'CAMEO_DEUG_2015': 'X', 'CAMEO_INTL_2015': 'XX'} attributes_to_replace([azdias, customers], col_attr_to_replace) ###Output _____no_output_____ ###Markdown Get to know the description of the dataWe read in the description of the data, i.e., 'DIAS Attributes - Values 2017', and use it to identify additional missing values in our dataframes. ###Code info_table = pd.read_excel('DIAS Attributes - Values 2017.xlsx',header=1).drop(['Unnamed: 0'], axis=1) info_table.Attribute = info_table.Attribute.ffill() info_table.Description = info_table.Description.ffill() info_table.Meaning = info_table.Meaning.ffill() info_table.head() ###Output _____no_output_____ ###Markdown As we can see, some attributes encode numbers as missing or unknown values. For example: the value '-1' in the attribute 'AGER_TYP'Therefore, we create a dictionary, called 'value_meaning', to gather these information. ###Code # dict of attributes with the corresponding value that indicates that its meaning is unknown or missing value_meaning = {} for index, row in info_table.iterrows(): if 'unknown' in row.Meaning: value_meaning[row.Attribute] = list(map(lambda x: int(x), str(row.Value).split(','))) ###Output _____no_output_____ ###Markdown Next, we replace all unknown values in the corresponding columns by NaN values, i.e., np.nan. ###Code def add_nans(dataframes, nan_values): """ Replaces unknown values with np.nan. args: - nan_values: dict with keys=attributes and values=unknown values - dataframes: pandas Dataframes """ for frame in dataframes: for col in frame.columns: if col in nan_values.keys(): value = nan_values[col] if value != None: frame[col].replace(value, np.nan, inplace=True) # replace unknown values by np.nan add_nans([azdias, customers], value_meaning) ###Output _____no_output_____ ###Markdown Missing valuesIn this section, we investigate the distribution of missing values over the attributes in both dataframes. We decide on which attributes to discard from the dataframes and we will investigate how to fill in missing values. 
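As a side note before the detailed breakdown below: the per-column missing-value percentages computed in the next cells can also be obtained more compactly with pandas built-ins. This is only a sketch of that alternative (it assumes `azdias` and `customers` are the dataframes prepared above); the notebook's explicit list comprehensions are kept as-is.

```python
# Per-column share of missing values as a percentage, sorted descending.
# isnull().mean() returns the fraction of NaNs per column in one pass.
def missing_summary(df):
    pct = df.isnull().mean().mul(100).round(2)
    return pct.sort_values(ascending=False).to_frame(name='relative')

# Example usage (assumes azdias/customers are already loaded and cleaned):
# missing_summary(azdias).head()
# missing_summary(customers).head()
```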
Distribution of missing values
 ###Code
# Determine the distribution of missing values in 'azdias'
azdias_missing_relative = [round(pd.isnull(azdias[col]).values.sum()/azdias.shape[0],2)*100 for col in azdias.columns]
azdias_missing_total = list(map(lambda x: int(x*azdias.shape[0]), azdias_missing_relative))
azdias_missing = {'total': azdias_missing_total, 'relative': azdias_missing_relative}
azdias_missing_df = pd.DataFrame(azdias_missing, index=azdias.columns)
azdias_missing_df = azdias_missing_df.sort_values(by=['relative'], ascending=False)
azdias_missing_df.head()

# Determine the distribution of missing values in 'customers'
customers_missing_relative = [round(pd.isnull(customers[col]).values.sum()/customers.shape[0],2)*100 for col in customers.columns]
customers_missing_total = list(map(lambda x: int(x*customers.shape[0]), customers_missing_relative))
customers_missing = {'total': customers_missing_total, 'relative': customers_missing_relative}
customers_missing_df = pd.DataFrame(customers_missing, index=customers.columns)
customers_missing_df = customers_missing_df.sort_values(by=['relative'], ascending=False)
customers_missing_df.head()
 ###Output
 _____no_output_____
 ###Markdown
Since the attributes 'ALTER_KIND1' through 'ALTER_KIND4' have a high percentage of missing values, we replace these features by a new feature, called 'ALTER_KIND_MEAN', measuring the mean age of all kids in a household.
 ###Code
df_azdias_kind = azdias[['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4']]
azdias['ALTER_KIND_MEAN'] = np.sum(df_azdias_kind, axis=1)
# Divide the row sum by the number of non-missing child ages
azdias['ALTER_KIND_MEAN'].loc[df_azdias_kind.isnull().sum(axis=1) == 2] = azdias[df_azdias_kind.isnull().sum(axis=1) == 2]['ALTER_KIND_MEAN']/2
azdias['ALTER_KIND_MEAN'].loc[df_azdias_kind.isnull().sum(axis=1) == 1] = azdias[df_azdias_kind.isnull().sum(axis=1) == 1]['ALTER_KIND_MEAN']/3
azdias['ALTER_KIND_MEAN'].loc[df_azdias_kind.isnull().sum(axis=1) == 0] = azdias[df_azdias_kind.isnull().sum(axis=1) == 0]['ALTER_KIND_MEAN']/4
azdias.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'], axis=1, inplace=True)

df_customers_kind = customers[['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4']]
customers['ALTER_KIND_MEAN'] = np.sum(df_customers_kind, axis=1)
# Two missing values mean two child ages are present, so divide by 2 here as well
customers['ALTER_KIND_MEAN'].loc[df_customers_kind.isnull().sum(axis=1) == 2] = customers[df_customers_kind.isnull().sum(axis=1) == 2]['ALTER_KIND_MEAN']/2
customers['ALTER_KIND_MEAN'].loc[df_customers_kind.isnull().sum(axis=1) == 1] = customers[df_customers_kind.isnull().sum(axis=1) == 1]['ALTER_KIND_MEAN']/3
customers['ALTER_KIND_MEAN'].loc[df_customers_kind.isnull().sum(axis=1) == 0] = customers[df_customers_kind.isnull().sum(axis=1) == 0]['ALTER_KIND_MEAN']/4
customers.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'], axis=1, inplace=True)

azdias_missing_df.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'], inplace=True)
customers_missing_df.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'], inplace=True)
 ###Output
 _____no_output_____
 ###Markdown
Let us now revisit the distribution of missing values.
###Code f, axes = plt.subplots(1, 2, figsize=(25, 8)) chart = sns.distplot(azdias_missing_df['relative'], 8, kde=False, ax=axes[0]) axes[0].set_title('Distribution of missing values in "azdias"', fontsize=22) axes[0].set_ylabel('Count', fontsize=18) axes[0].set_xlabel('Percentage', fontsize=18) chart = sns.distplot(customers_missing_df['relative'], 8, kde=False, ax=axes[1]) axes[1].set_title('Distribution of missing values in "customers"', fontsize=22) axes[1].set_ylabel('Count', fontsize=18) axes[1].set_xlabel('Percentage', fontsize=18) sns.set_style('darkgrid') f.savefig("pictures/missing_values.pdf",format='pdf', bbox_inches='tight') ###Output _____no_output_____ ###Markdown The barplots displayed above indicate that most of the data features of both datasets have less than 40% missing data.After testing some percentage values, it seems like 31% is the lowest possible threshold such that both datasets have the same features with less than 31% of missing data. ###Code threshold_missing = 31 print("Number of features with less than {}% missing data for azdias:".format(threshold_missing)) print(len(azdias_missing_df[azdias_missing_df['relative'] <= threshold_missing])) print() print("Number of features with less than {}% missing data for customers:".format(threshold_missing)) print(len(customers_missing_df[customers_missing_df['relative'] <= threshold_missing])) print() print("Both datasets have the same features with less than {}% missing data:".format(threshold_missing)) print(set(azdias_missing_df[azdias_missing_df['relative'] <= threshold_missing].index) == set(customers_missing_df[customers_missing_df['relative'] <= threshold_missing].index)) ###Output Number of features with less than 31% missing data for azdias: 355 Number of features with less than 31% missing data for customers: 355 Both datasets have the same features with less than 31% missing data: True ###Markdown The figure below displays the attributes that we are going to remove. ###Code f, axes = plt.subplots(1, 2, figsize=(25, 8)) chart = sns.barplot(azdias_missing_df[azdias_missing_df['relative'] > threshold_missing].index, azdias_missing_df[azdias_missing_df['relative'] > threshold_missing]['relative'], ax=axes[0]) chart.set_title('Removed attributes "azdias"', fontsize=22) chart.set_xlabel('Attributes', fontsize=16) chart.set_ylabel('Percentage of missing values', fontsize=18) chart1 = sns.barplot(customers_missing_df[customers_missing_df['relative'] > threshold_missing].index, customers_missing_df[customers_missing_df['relative'] > threshold_missing]['relative'], ax=axes[1]) chart1.set_title('Removed attributes "customers"', fontsize=22) chart1.set_xlabel('Attributes', fontsize=18) chart1.set_ylabel('Percentage of missing values', fontsize=18); f.savefig("pictures/missing_values1.pdf",format='pdf', bbox_inches='tight') # Identify the attributes with more than 31% of missing data and discard them from both datasets. 
old_features = azdias_missing_df.index.values new_features = azdias_missing_df[azdias_missing_df['relative'] <= threshold_missing].index.values attributes_to_discard = list(set(old_features).symmetric_difference(set(new_features))) # Delete columns customers.drop(attributes_to_discard, axis=1, inplace=True) azdias.drop(attributes_to_discard, axis=1, inplace=True) print("Discarded attributes: {}".format(attributes_to_discard)) ###Output Discarded attributes: ['ALTER_HH', 'EXTSEL992', 'TITEL_KZ', 'KK_KUNDENTYP', 'AGER_TYP', 'KBA05_BAUMAX'] ###Markdown Categorical data First, we investigate the distribution of the categorical values. ###Code cat_cols = azdias.select_dtypes(include=['object']).columns.values cat_cols[1], cat_cols[2] = cat_cols[2], cat_cols[1] f, axes = plt.subplots(len(cat_cols), 2, figsize=(25, 14)) plt.subplots_adjust(hspace=0.5) for i in range(len(cat_cols)): for j in range(2): if j == 0: chart = sns.barplot(azdias[cat_cols[i]].value_counts().index, azdias[cat_cols[i]].value_counts(), alpha=0.9, ax=axes[i][j]) axes[i][j].set_title('Frequency Distribution of {} in azdias'.format(cat_cols[i]), fontsize=20) axes[i][j].set_ylabel('Number of Occurrences', fontsize=16) axes[i][j].set_xlabel('Value', fontsize=16) if i == 2: axes[i][j].set_xticklabels(azdias[cat_cols[i]].value_counts().index, rotation=90) if j == 1: chart = sns.barplot(customers[cat_cols[i]].value_counts().index, customers[cat_cols[i]].value_counts(), alpha=0.9, ax=axes[i][j]) axes[i][j].set_title('Frequency Distribution of {} in customers'.format(cat_cols[i]), fontsize=20) axes[i][j].set_ylabel('Number of Occurrences', fontsize=16) axes[i][j].set_xlabel('Value', fontsize=16) if i == 2: axes[i][j].set_xticklabels(azdias[cat_cols[i]].value_counts().index, rotation=90) f.savefig("pictures/categorical_variables.pdf",format='pdf', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Fill in missing values We replace missing values depending on the dtype of the corresponding attribute. ###Code def replace_nans(dataframes): """ Replace NaNs in each dataframe depending on the attributes dtype. args: - dataframes """ for frame in dataframes: for col in frame.columns: if frame[col].dtype == 'object': frame[col].replace(np.nan, frame[col].value_counts(ascending=False).index[0], inplace=True) else: frame[col].replace(np.nan, frame[col].mean(), inplace=True) if True in [True in pd.isnull(frame[col]).values for col in frame.columns]: print("There are still missing values.") else: print("There are no missing values anymore.") replace_nans([azdias, customers]) ###Output There are no missing values anymore. There are no missing values anymore. ###Markdown Create dummy variables For all categorical attributes of 'azdias' and 'customers', we create dummy-variables. ###Code cat_cols_azdias = azdias.select_dtypes(include=['object']).columns.values azdias = pd.get_dummies(azdias, columns=cat_cols_azdias) azdias.head() cat_cols_customers = customers.select_dtypes(include=['object']).columns.values customers = pd.get_dummies(customers, columns=cat_cols_customers) customers.head() ###Output _____no_output_____ ###Markdown Lets check for common attributes, again. 
###Code common_attributes = set(customers.columns).intersection(set(azdias.columns)) print("Number of attributes for 'azdias': {}".format(len(azdias.columns))) print("Number of attributes for 'customers': {}".format(len(customers.columns))) print("Number of common attributes: {}".format(len(common_attributes))) ###Output Number of attributes for 'azdias': 434 Number of attributes for 'customers': 434 Number of common attributes: 434 ###Markdown Normalization of the data We scale and translate each feature individually such that it is in the given range between 0 and 1. ###Code scaler = MinMaxScaler() azdias_scaled = pd.DataFrame(scaler.fit_transform(azdias.loc[:, azdias.columns != 'LNR'].astype(float))) azdias_scaled.columns = azdias.columns[1:] azdias_scaled.index = azdias.index azdias_scaled.head() customers_scaled = pd.DataFrame(scaler.fit_transform(customers.loc[:, customers.columns != 'LNR'].astype(float))) customers_scaled.columns = customers.columns[1:] customers_scaled.index = customers.index customers_scaled.head() ###Output _____no_output_____ ###Markdown Unsupervised learningHere, we aim at investigating the relationship between the demographics of the company's existing customers and the general population of Germany. First, we use PCA (Principal Component Analysis) to reduce the dimensionality of the input. ###Code pca = PCA() pca = pca.fit(azdias_scaled) ###Output _____no_output_____ ###Markdown Next, we investigate the cumulative explained variance. ###Code plt.figure(figsize=(13,6)) plt.plot(pca.explained_variance_ratio_.cumsum()*100) plt.ylabel('Explained variance [%]', fontsize=18) plt.xlabel('Number of principal components', fontsize=18) plt.title('Cumulative explained variance', fontsize=20); plt.savefig("pictures/pca_explained_var.pdf",format='pdf', bbox_inches='tight') ###Output _____no_output_____ ###Markdown It seems like using ca. 50 PC´s are sufficient to cover at least 60% of the explained variance. ###Code pca = PCA(n_components=50) pca_azdias = pca.fit_transform(azdias_scaled) df_expl_var = pd.DataFrame({'Explained variance': pca.explained_variance_ratio_*100, 'Cumulative sum': pca.explained_variance_ratio_.cumsum()*100}) fig, ax1 = plt.subplots(figsize=(15, 8)) df_expl_var['Explained variance'].plot(kind='bar') df_expl_var['Cumulative sum'].plot(kind='line', color='y') ax1.set_ylabel('Explained variance [%]', fontsize=16) ax1.set_xlabel('Principal components', fontsize=16) ax1.legend(fontsize=16); plt.savefig("pictures/pca_explained_var1.pdf",format='pdf', bbox_inches='tight') pca_azdias_df = pd.DataFrame(pca_azdias) pca_azdias_df.head() pca_customers = pca.transform(customers_scaled) pca_customers_df = pd.DataFrame(pca_customers) pca_customers_df.head() ###Output _____no_output_____ ###Markdown Next, we use KMeans-Clustering to identify groups within the data. We use MiniBatchKMeans to speed up computation time. Further, as a measurement for the error, we use inertia, which is defined as the sum of square distances of samples to their nearest neighbor. ###Code K = range(1,50) inertia = [] for k in K: kmeans = MiniBatchKMeans(n_clusters=k,random_state=0) kmeans.fit(pca_azdias_df) inertia.append(kmeans.inertia_) ###Output _____no_output_____ ###Markdown In the figure below, we conduct the Elbow Method to identify the optimal number of clusters to use for our analysis. 
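Reading the elbow off the plot is ultimately a judgment call. As a rough numerical cross-check, the point of strongest curvature of the inertia curve can be approximated from the values already computed; a minimal sketch, assuming the `K` and `inertia` variables from the loop above (this heuristic only complements the visual inspection, it does not replace it):

```python
import numpy as np

# The elbow is roughly where the inertia curve bends most sharply,
# i.e. where the second difference of the inertia values is largest.
inertia_arr = np.array(inertia)
second_diff = np.diff(inertia_arr, n=2)        # curvature proxy
elbow_k = K[int(np.argmax(second_diff)) + 1]   # +1 offsets the double differencing
print('Heuristic elbow at k =', elbow_k)
```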
###Code plt.figure(figsize=(16,8)) plt.plot(K, inertia, '-bo') plt.xlabel('Number of Clusters', fontsize=16) plt.ylabel('Inertia', fontsize=16) plt.title('Elbow Method', fontsize=20) plt.savefig("pictures/elbow_method.pdf",format='pdf', bbox_inches='tight') plt.show() ###Output _____no_output_____ ###Markdown We choose k=17 for our final KMeans model. ###Code opt_k = 17 kmeans_model = KMeans(n_clusters=opt_k).fit(pca_azdias_df) kmeans_model_customers = kmeans_model.predict(pca_customers_df) ###Output _____no_output_____ ###Markdown Next, we calculate the cluster distribution among the individuals in azdias and customers. ###Code # distribution of different classes in customers and azdias count = pd.Series(kmeans_model.labels_).value_counts().append(pd.Series(kmeans_model_customers).value_counts()) cluster = pd.Series(kmeans_model.labels_).value_counts().index.append(pd.Series(kmeans_model_customers).value_counts().index) kmeans_results = {'count': count, 'class': ['azdias']*opt_k + ['customers']*opt_k, 'cluster': cluster } kmeans_results = pd.DataFrame(kmeans_results) kmeans_results['percentage'] = np.where(kmeans_results['class'] == 'azdias', round(kmeans_results['count']/pca_azdias_df.shape[0]*100,2), round(kmeans_results['count']/pca_customers_df.shape[0]*100,2)) kmeans_results.head() ###Output _____no_output_____ ###Markdown The graphic below shows the absoute and relative cluster distribution of azdias and customers, respectively. ###Code f, axes = plt.subplots(1, 2, figsize=(25, 8)) sns.barplot(x='cluster', hue='class', y='count',data=kmeans_results, ax=axes[0]) axes[0].set_xlabel('Cluster', fontsize=18) axes[0].set_ylabel('Absolute frequency', fontsize=18) axes[0].set_title('Absolute Cluster distribution', fontsize=22) axes[0].legend(fontsize=20) sns.barplot(x='cluster', hue='class', y='percentage',data=kmeans_results, ax=axes[1]) axes[1].set_xlabel('Cluster', fontsize=18) axes[1].set_ylabel('Relative frequency', fontsize=18) axes[1].legend(fontsize=20) axes[1].set_title('Relative Cluster distribution', fontsize=22); f.savefig("pictures/cluster_distribution.pdf",format='pdf', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Next, we investigate which clusters in customers data are most significant with respect to the azdias dataframe. ###Code cluster_comparison = pd.DataFrame({'azdias': pd.Series(kmeans_model.labels_).value_counts()/pca_azdias_df.shape[0]*100, 'customers': pd.Series(kmeans_model_customers).value_counts()/pca_customers_df.shape[0]*100}) cluster_comparison['difference'] = abs(cluster_comparison['azdias'] - cluster_comparison['customers']) cluster_comparison.head() plt.figure(figsize=(15,6)) chart = sns.barplot(x=cluster_comparison.index, y='difference',data=cluster_comparison) chart.set_ylabel('Absolute difference', fontsize=18) chart.set_xlabel('Cluster', fontsize=18) chart.set_title('Absolute difference', fontsize=22); plt.savefig("pictures/cluster_differences.pdf",format='pdf', bbox_inches='tight') def inspect_cluster(cluster_num, components_num): """ Inspect specific cluster and display the first 'components_num' weights. 
args: - cluster_num: cluster to display - components_num: number of components sorted by weight to display """ inspect_df = pd.DataFrame() inspect_df['pca_abs_weight'] = abs(kmeans_model.cluster_centers_[cluster_num]) inspect_df['pca_weight'] = kmeans_model.cluster_centers_[cluster_num] inspect_df['pca_component'] = range(50) result = inspect_df.sort_values(by=['pca_abs_weight'], ascending=[0])[:components_num] return result # 3rd largest difference difference_3rd = cluster_comparison['difference'].sort_values(ascending=False).iloc[2] # condition to find the three largest difference values condition_3rd = cluster_comparison['difference'] >= difference_3rd # find the corresponding clusters clusters_to_inspect = cluster_comparison.loc[condition_3rd].index.values for index in clusters_to_inspect: print('Cluster: {}'.format(index)) print(inspect_cluster(index, 10)) print() ###Output Cluster: 2 pca_abs_weight pca_weight pca_component 2 1.126045 -1.126045 2 8 0.977291 -0.977291 8 0 0.948835 -0.948835 0 6 0.868821 -0.868821 6 10 0.705695 -0.705695 10 1 0.678865 -0.678865 1 4 0.348324 -0.348324 4 11 0.315399 0.315399 11 7 0.246026 -0.246026 7 3 0.219468 -0.219468 3 Cluster: 3 pca_abs_weight pca_weight pca_component 0 2.076285 2.076285 0 3 1.429899 1.429899 3 1 0.495931 -0.495931 1 2 0.397575 0.397575 2 5 0.381997 0.381997 5 7 0.365100 0.365100 7 12 0.187157 -0.187157 12 14 0.132874 -0.132874 14 39 0.116065 -0.116065 39 30 0.112845 -0.112845 30 Cluster: 8 pca_abs_weight pca_weight pca_component 0 2.775528 2.775528 0 2 0.523819 0.523819 2 5 0.437760 -0.437760 5 3 0.323666 -0.323666 3 10 0.225641 -0.225641 10 1 0.207289 0.207289 1 8 0.185177 -0.185177 8 20 0.184930 -0.184930 20 6 0.163173 0.163173 6 4 0.150872 -0.150872 4 ###Markdown Supervised Learning ModelNow that we've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, we'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code # read in the training data mailout_train = pd.read_csv('data/mailout_train.csv') mailout_train.head() ###Output _____no_output_____ ###Markdown Next, we apply all the preliminary steps to the training data, which we have conducted on azdias and customers. 
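The steps in the next cell mirror, one by one, the cleaning that was applied to `azdias` and `customers`. Purely as an illustration of how those repeated steps could be bundled into a single helper, here is a sketch; the name `preprocess_like_azdias` and the simplified `ALTER_KIND_MEAN` computation via `mean(axis=1)` are my own additions and not part of the notebook, which keeps the steps inline.

```python
# Hypothetical convenience wrapper (not part of the original notebook):
# chains the cleaning helpers defined earlier so mailout_train / mailout_test
# can be prepared with one call instead of repeating each step.
def preprocess_like_azdias(df):
    mixed_datatypes_handler([df], mixed_datatypes)            # harmonize mixed dtypes
    df = df.drop(['EINGEFUEGT_AM'], axis=1)
    attributes_to_replace([df], col_attr_to_replace)          # 'XX'/'X' -> NaN
    add_nans([df], value_meaning)                             # encoded unknowns -> NaN
    kind_cols = ['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4']
    # mean(axis=1) ignores NaNs, a simplification of the manual division used above
    df['ALTER_KIND_MEAN'] = df[kind_cols].mean(axis=1)
    df = df.drop(kind_cols + attributes_to_discard, axis=1)
    replace_nans([df])                                        # impute remaining NaNs
    cat_cols = df.select_dtypes(include=['object']).columns
    return pd.get_dummies(df, columns=cat_cols)
```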
###Code # Same attributes are sometimes encoded as strings and sometimes as floats, e.g., 1 and '1' mixed_datatypes = {'CAMEO_DEUG_2015': [str(i) for i in range(1,10)], 'CAMEO_INTL_2015': ['22', '24', '41', '12', '54', '51', '44', '35', '23', '25', '14','34', '52', '55', '31', '32', '15', '13', '43', '33', '45']} mixed_datatypes_handler([mailout_train], mixed_datatypes) mailout_train.drop(['EINGEFUEGT_AM'], axis=1, inplace=True) # Some of the columns have attributes that do not occur in DIAS-Attributes col_attr_to_replace = {'CAMEO_DEU_2015': 'XX', 'CAMEO_DEUG_2015': 'X', 'CAMEO_INTL_2015': 'XX'} attributes_to_replace([mailout_train], col_attr_to_replace) # replace unknown values by np.nan add_nans([mailout_train], value_meaning) df_mailout_train_kind = mailout_train[['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4']] mailout_train['ALTER_KIND_MEAN'] = np.sum(df_mailout_train_kind, axis=1) mailout_train['ALTER_KIND_MEAN'].loc[df_mailout_train_kind.isnull().sum(axis=1) == 2] = mailout_train[df_mailout_train_kind.isnull().sum(axis=1) == 2]['ALTER_KIND_MEAN']/2 mailout_train['ALTER_KIND_MEAN'].loc[df_mailout_train_kind.isnull().sum(axis=1) == 1] = mailout_train[df_mailout_train_kind.isnull().sum(axis=1) == 1]['ALTER_KIND_MEAN']/3 mailout_train['ALTER_KIND_MEAN'].loc[df_mailout_train_kind.isnull().sum(axis=1) == 0] = mailout_train[df_mailout_train_kind.isnull().sum(axis=1) == 0]['ALTER_KIND_MEAN']/4 mailout_train.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'], axis=1, inplace=True) #remove attributes that we have discarded from azdias and customers mailout_train.drop(attributes_to_discard, axis=1, inplace=True) print(attributes_to_discard) replace_nans([mailout_train]) cat_cols_mailout_train = mailout_train.select_dtypes(include=['object']).columns.values mailout_train = pd.get_dummies(mailout_train, columns=cat_cols_mailout_train) ###Output _____no_output_____ ###Markdown We store the responses in a new variable and discard the response column from the training data. ###Code response = mailout_train['RESPONSE'] mailout_train.drop(['RESPONSE'], axis=1, inplace=True) common_attributes = set(customers.columns).intersection(set(mailout_train.columns)) print("Number of attributes for 'azdias': {}".format(len(azdias.columns))) print("Number of attributes for 'customers': {}".format(len(customers.columns))) print("Number of attributes for 'mailout_train': {}".format(len(mailout_train.columns))) print("Number of common attributes: {}".format(len(common_attributes))) scaler = MinMaxScaler() mailout_train_scaled = pd.DataFrame(scaler.fit_transform(mailout_train.loc[:, mailout_train.columns != 'LNR'].astype(float))) mailout_train_scaled.columns = mailout_train.columns[1:] mailout_train_scaled.index = mailout_train.index mailout_train_scaled.head() pca_mailout_train = pca.transform(mailout_train_scaled) pca_mailout_train_df = pd.DataFrame(pca_mailout_train) pca_mailout_train_df.head() kmeans_model_mailout_train = kmeans_model.predict(pca_mailout_train_df) ###Output _____no_output_____ ###Markdown Now, we'll start building our final training dataframe that will be used to train our classification models. ###Code df_train = pd.concat([pd.Series(kmeans_model_mailout_train, name='KmeansPrediction'),response], axis=1) df_train.head() ###Output _____no_output_____ ###Markdown One can see from the figure below that the response behavior is quite unbalanced. We have to account for this fact in our classification model. 
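Before plotting, the imbalance can also be quantified directly. The `class_weight='balanced'` option shown at the end is one common remedy in scikit-learn; it is only a sketch of that idea (assuming the `response` Series defined above) and not what this notebook uses later, where cluster-based probability features and AUC handle the imbalance instead.

```python
# Quantify the class imbalance in the training responses
# (assumes the `response` Series from the cell above).
pos_rate = response.mean()
print('Positive responses: {:.2%} of {} rows'.format(pos_rate, len(response)))

# One standard way to account for imbalance is to weight classes inversely
# to their frequency, e.g. via class_weight='balanced':
from sklearn.linear_model import LogisticRegression
balanced_lr = LogisticRegression(class_weight='balanced', max_iter=1000)
```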
###Code plt.figure(figsize=(15,6)) chart = sns.countplot(x="KmeansPrediction", hue="RESPONSE", data=df_train) chart.set_xlabel('Cluster', fontsize=18) chart.set_ylabel('Frequency', fontsize=18) chart.legend(fontsize=16) chart.set_title('Frequency distribution of responses between different clusters', fontsize=22); plt.savefig("pictures/cluster_distributionYN.pdf",format='pdf', bbox_inches='tight') def create_cluster_probability(clustermember, responses, num_cluster): """ 1) For each cluster, create a cluster probability indicating how likely it is for a customer to be in a specific cluster. 2) For each cluster, create a scaled probability of a positive response behavior. 3) For each cluster, create a scaled probability of a positive response behavior in that cluster. args: - clustermember: series object indicating the cluster membership of each individual - responses: series object indicating the response (0,1) of each individual - num_cluster: number of clusters """ table = clustermember.groupby(responses).value_counts() probaYes = {cluster: table[1][cluster]/(table[1][cluster]+table[0][cluster]) for cluster in range(num_cluster)} factor = 1/sum(probaYes.values()) probaYes = {key: value*factor for key, value in probaYes.items()} probaCluster = {cluster: (table[1][cluster]+table[0][cluster])/len(clustermember) for cluster in range(opt_k)} proba = {cluster: probaYes[cluster]*probaCluster[cluster] for cluster in range(opt_k)} factor = 1/sum(proba.values()) proba = {key: value*factor for key, value in proba.items()} return probaYes, probaCluster, proba probYes, probCluster, prob = create_cluster_probability(df_train['KmeansPrediction'], df_train['RESPONSE'], opt_k) probYes probCluster prob df_train['ClusterProb'] = [None]*df_train.shape[0] df_train['ProbYes'] = [None]*df_train.shape[0] df_train['Prob'] = [None]*df_train.shape[0] for i in range(opt_k): df_train['ClusterProb'].loc[df_train['KmeansPrediction'] == i] = probCluster[i] df_train['ProbYes'].loc[df_train['KmeansPrediction'] == i] = probYes[i] df_train['Prob'].loc[df_train['KmeansPrediction'] == i] = prob[i] df_train.head() ###Output _____no_output_____ ###Markdown In addition to the features ClusterProb, ProbYes and Prob, which we created based on our unsupervised learning model, we include some additional features of mailout_train_scaled based on univariate feature selection using F-test for feature scoring. ###Code # define univariate feature selection object using F-test for feature scoring selector = SelectKBest(score_func=f_classif, k=50) X = selector.fit_transform(mailout_train_scaled, response) # which cols of mailout_train_scaled have been chosen selected_cols = mailout_train_scaled.columns.values[selector.get_support()] # what are the scores of each column selected_scores = selector.scores_[selector.get_support()] selection = {'Col_names': selected_cols, 'Score': selected_scores} selection_results = pd.DataFrame(selection).sort_values(by=['Score'], ascending=False) selection_results plt.figure(figsize=(15,6)) chart = sns.barplot(x='Col_names', y='Score', data=selection_results) chart.set_xlabel('Column names', fontsize=18) chart.set_ylabel('Score', fontsize=18) chart.set_title('F-Test Score for univariate feature selection', fontsize=22) chart.set_xticklabels(selection_results['Col_names'], rotation=90); plt.savefig("pictures/ftest.pdf",format='pdf', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Only a few attributes have a F-test score of more than 50. 
Therefore, we will include them in our training data along with the feature Prob. ###Code X_1 = mailout_train_scaled[selection_results.loc[selection_results['Score'] > 50]['Col_names'].values] X_2 = df_train[['Prob']] X = pd.concat([X_1, X_2], axis=1) X.head() y = df_train['RESPONSE'] factor = 1/sum(X['Prob'].values) weight = [value*factor for value in X['Prob'].values] ###Output _____no_output_____ ###Markdown Now, we are going to test different classification models on our training data. ###Code def model_eval(classifier, X, y, name): """ Displays the ROC curve and score w.r.t. a given classifier, a dataframe X and a series of responses y. args: - classifier: scikit learn classifier object - X: pandas Dataframe - y: pandas Series (responses) - name: name of classifier """ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0) classifier.fit(X_train, y_train) y_pred = classifier.predict_proba(X_test)[:,1] clf_disp = plot_roc_curve(classifier, X_test, y_test, name=name) clf_disp.figure_.suptitle("ROC curve") plt.savefig("pictures/{}.pdf".format(name),format='pdf', bbox_inches='tight') print("AUC Score:", roc_auc_score(y_test, y_pred, average='weighted')) ###Output _____no_output_____ ###Markdown Logistic regression ###Code lr = LogisticRegression() model_eval(lr, X, y, "Logistic Regression") ###Output AUC Score: 0.6799533965115625 ###Markdown Random forest classifier ###Code rf = RandomForestClassifier() model_eval(rf, X, y, "Random Forest") ###Output AUC Score: 0.7717185305734129 ###Markdown K-nearest neighbor classifier ###Code kn = KNeighborsClassifier(n_neighbors=75) model_eval(kn, X, y, "Knearest Neighbor") ###Output AUC Score: 0.7203309813430409 ###Markdown AdaBoost classifier ###Code ada = AdaBoostClassifier() model_eval(ada, X, y, "AdaBoost") ###Output AUC Score: 0.7901575970510264 ###Markdown Decision tree classifier ###Code dt = DecisionTreeClassifier() model_eval(dt, X, y, "Decision Tree") ###Output AUC Score: 0.7689250349486256 ###Markdown Quadratic discriminant analysis ###Code qda = QuadraticDiscriminantAnalysis() model_eval(qda, X, y, "Quadratic Discriminant Analysis") ###Output AUC Score: 0.772381786248898 ###Markdown Gradient boosting classifier ###Code classifier = GradientBoostingClassifier() model_eval(classifier, X, y, "Gradient Boosting") ###Output AUC Score: 0.7895103042076828 ###Markdown Parameter Tuning In the last section, we have tested the performance of some models on our training data. For those who performed best, we will try to improve the AUC score by tuning the parameters. ###Code def parameter_tuning(classifier, parameters, cv, X, y, name): """ Receives a classifier along with a parameter grid to conduct hyperparameter tuning. Prints the best parameters along with the grid search results. Calls the function model_eval w.r.t. the best estimator. Returns the best estimator. 
args: - classifier: scikit learn classifier object - parameters: parameter grid for tuning - cv: cross validation - X: pandas Dataframe - y: pandas Series (responses) - name: name of classifier """ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0) grid = GridSearchCV(classifier, param_grid=parameters, cv=cv, scoring='roc_auc') grid.fit(X_test, y_test) print('Best parameters:', grid.best_params_) model_eval(grid.best_estimator_, X, y, name=name) display(pd.DataFrame(grid.cv_results_)) return grid.best_estimator_ ###Output _____no_output_____ ###Markdown Decision tree classifier ###Code dt_clf = DecisionTreeClassifier() parameters = {'criterion': ['gini', 'entropy'], 'max_depth': [10, 50, 90, 200, None], 'min_samples_split': [2, 5, 10], 'min_samples_leaf': [1, 2, 4], 'class_weight': ['balanced'], 'ccp_alpha': [0.0, 0.1, 0.5] } best = parameter_tuning(dt_clf, parameters, 5, X, y, 'Decision Tree') ###Output Best parameters: {'ccp_alpha': 0.0, 'class_weight': 'balanced', 'criterion': 'entropy', 'max_depth': 10, 'min_samples_leaf': 1, 'min_samples_split': 2} AUC Score: 0.7800383586856363 ###Markdown AdaBoost classifier Now, we use the previously tuned decision tree classifier as a base estimator for Ada boost classifier. ###Code parameters = { 'n_estimators': [50, 100, 300], 'learning_rate' : [0.001, 0.05, 0.1, 0.6, 1] } best_final = parameter_tuning(AdaBoostClassifier(), parameters, 3, X, y, 'AdaBoost') ###Output Best parameters: {'learning_rate': 0.05, 'n_estimators': 100} AUC Score: 0.7825999941735663 ###Markdown Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
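To make the accuracy-versus-AUC point concrete, here is a small synthetic illustration; the 1% positive rate is an assumption chosen for the example, not a figure taken from the actual data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Build a toy label vector with ~1% positives (assumed rate for illustration).
y_true = np.zeros(10_000, dtype=int)
y_true[:100] = 1

# A model that predicts "no response" for everyone looks highly accurate...
always_no = np.zeros_like(y_true)
print('Accuracy of "always no":', accuracy_score(y_true, always_no))   # 0.99

# ...but AUC is computed from the ranking of scores, so the submission only
# needs scores that rank likely customers ahead of the rest.
rng = np.random.default_rng(0)
random_scores = rng.random(10_000)
print('AUC of random scores:', roc_auc_score(y_true, random_scores))   # ~0.5
```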
###Code mailout_test = pd.read_csv('data/mailout_test.csv') mailout_test.head() # Same attributes are sometimes encoded as strings and sometimes as floats, e.g., 1 and '1' mixed_datatypes = {'CAMEO_DEUG_2015': [str(i) for i in range(1,10)], 'CAMEO_INTL_2015': ['22', '24', '41', '12', '54', '51', '44', '35', '23', '25', '14','34', '52', '55', '31', '32', '15', '13', '43', '33', '45']} mixed_datatypes_handler([mailout_test], mixed_datatypes) mailout_test.drop(['EINGEFUEGT_AM'], axis=1, inplace=True) # Some of the columns have attributes that do not occur in DIAS-Attributes col_attr_to_replace = {'CAMEO_DEU_2015': 'XX', 'CAMEO_DEUG_2015': 'X', 'CAMEO_INTL_2015': 'XX'} attributes_to_replace([mailout_test], col_attr_to_replace) # replace unknown values by np.nan add_nans([mailout_test], value_meaning) df_mailout_test_kind = mailout_test[['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4']] mailout_test['ALTER_KIND_MEAN'] = np.sum(df_mailout_test_kind, axis=1) mailout_test['ALTER_KIND_MEAN'].loc[df_mailout_test_kind.isnull().sum(axis=1) == 2] = mailout_test[df_mailout_test_kind.isnull().sum(axis=1) == 2]['ALTER_KIND_MEAN']/2 mailout_test['ALTER_KIND_MEAN'].loc[df_mailout_test_kind.isnull().sum(axis=1) == 1] = mailout_test[df_mailout_test_kind.isnull().sum(axis=1) == 1]['ALTER_KIND_MEAN']/3 mailout_test['ALTER_KIND_MEAN'].loc[df_mailout_test_kind.isnull().sum(axis=1) == 0] = mailout_test[df_mailout_test_kind.isnull().sum(axis=1) == 0]['ALTER_KIND_MEAN']/4 mailout_test.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'], axis=1, inplace=True) #remove attributes that we have discarded from azdias and customers mailout_test.drop(attributes_to_discard, axis=1, inplace=True) print(attributes_to_discard) replace_nans([mailout_test]) cat_cols_mailout_test = mailout_test.select_dtypes(include=['object']).columns.values mailout_test = pd.get_dummies(mailout_test, columns=cat_cols_mailout_test) common_attributes = set(customers.columns).intersection(set(mailout_test.columns)) print("Number of attributes for 'azdias': {}".format(len(azdias.columns))) print("Number of attributes for 'customers': {}".format(len(customers.columns))) print("Number of attributes for 'mailout_test': {}".format(len(mailout_test.columns))) print("Number of common attributes: {}".format(len(common_attributes))) scaler = MinMaxScaler() mailout_test_scaled = pd.DataFrame(scaler.fit_transform(mailout_test.loc[:, mailout_test.columns != 'LNR'].astype(float))) mailout_test_scaled.columns = mailout_test.columns[1:] mailout_test_scaled.index = mailout_test.index mailout_test_scaled.head() pca_mailout_test = pca.transform(mailout_test_scaled) pca_mailout_test_df = pd.DataFrame(pca_mailout_test) pca_mailout_test_df.head() kmeans_model_mailout_test = pd.Series(kmeans_model.predict(pca_mailout_test_df), name='KmeansPrediction') # cluster probability p = {i: kmeans_model_mailout_test.value_counts()[i]/mailout_test.shape[0] for i in range(opt_k)} pp = {i: p[i]*probYes[i] for i in range(opt_k)} factor = 1/sum(pp.values()) pp = {key: value*factor for key, value in pp.items()} df_test = pd.DataFrame(kmeans_model_mailout_test) df_test['ClusterProb'] = [None]*df_test.shape[0] df_test['ProbYes'] = [None]*df_test.shape[0] df_test['Prob'] = [None]*df_test.shape[0] for i in range(opt_k): df_test['ClusterProb'].loc[df_test['KmeansPrediction'] == i] = p[i] df_test['ProbYes'].loc[df_test['KmeansPrediction'] == i] = probYes[i] df_test['Prob'].loc[df_test['KmeansPrediction'] == i] = pp[i] df_test.head() Y_1 = 
mailout_test_scaled[selection_results.loc[selection_results['Score'] > 50]['Col_names'].values] Y_2 = df_test[['Prob']] Y = pd.concat([Y_1, Y_2], axis=1) submission = pd.Series(best_final.predict_proba(Y)[:,1], name='RESPONSE') submission = pd.concat([mailout_test['LNR'], submission], axis=1) submission.to_csv('submission', index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here import numpy as np import pandas as pd import matplotlib.pyplot as plt import random import re import os import joblib import scipy.stats as stats from sklearn.base import BaseEstimator, TransformerMixin from sklearn.preprocessing import Imputer from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedShuffleSplit from sklearn.pipeline import Pipeline from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier from sklearn.linear_model import LinearRegression, Ridge, Lasso, LogisticRegression from sklearn.metrics import roc_curve, roc_auc_score %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. 
Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. First we load the customers and the general population (azdias) datasets. ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') # Print the shapes of the loaded arrays print('Shape of the customers data:', customers.shape) print('Shape of the general population data:', azdias.shape) ###Output Shape of the customers data: (191652, 369) Shape of the general population data: (891221, 366) ###Markdown For data understanding, a reduced datasets is created with 'n_samples' randomly selected entries. 
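The next cell draws the samples by applying `random.sample` to row positions. As a design note, pandas can sample rows directly and reproducibly; this is only a sketch of that alternative (the `random_state=42` seed and the literal 3000, mirroring the `n_samples` chosen below, are assumptions for the example):

```python
# Alternative to the manual index sampling below: sample rows directly
# with pandas, which also skips building the intermediate range objects.
azdias_red = azdias.sample(n=3000, random_state=42)
customers_red = customers.sample(n=3000, random_state=42)
```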
###Code # number of samples to select n_samples = 3000 # obtain random samples z1 = azdias.shape[0] x1 = range(1,z1) y1 = random.sample(x1, n_samples) # Obtain random samples z2 = customers.shape[0] x2 = range(1,z2) y2 = random.sample(x2, n_samples) # create reduced datasets for understanding the data azdias_red, customers_red = azdias.iloc[y1,:].copy(), customers.iloc[y2,:].copy() ###Output _____no_output_____ ###Markdown **Checkpoint**: Store the reduced datasets. ###Code #store the reduced datasets to pickle azdias_red.to_pickle("./azdias_red.pkl") customers_red.to_pickle("./customers_red.pkl") # remove old dataframes to save ram del azdias, customers, azdias_red, customers_red, n_samples, x1, x2, y1, y2, z1, z2 ###Output _____no_output_____ ###Markdown **START HERE FOR PRAXIS** Load the reduced datasets for data undersranding (**skip** this step for using the full datasets). ###Code #load the reduced dataframes from the pickle files azdias= pd.read_pickle("./azdias_red.pkl") customers = pd.read_pickle("./customers_red.pkl") ###Output _____no_output_____ ###Markdown Data UnderstandingThe data is structured into customers data (**customers**) and data for the general population (**azdias**). The cells below are examplary for data understanding. ###Code customers.head() # print info for customers data customers.info() # print info for general population data azdias.info() # describe 'ANZ_PERSONEN' azdias['ANZ_PERSONEN'].describe() # median of 'ANZ_PERSONEN' azdias['ANZ_PERSONEN'].median() # plot number of persons known in houshold fig = plt.figure() x=azdias['ANZ_PERSONEN'].dropna().values #ax.set(xlabel='ANZ_PERSONEN') #ax.set(xlabel = 'False Positive Rate', ylabel = 'True Positive Rate') r = np.array(range(-1,46))+0.5 n, bins, patches = plt.hist(x,r,density=True, facecolor='g', alpha=0.75, rwidth=0.6) plt.title('Number of adult persons in a household') #plt.text(60, .025, r'$\mu=100,\ \sigma=15$') plt.xlim(-0.5, 7.5) plt.xlabel('Persons per household', fontsize=13) plt.ylabel('Density', fontsize=13) #plt.ylim(0, 0.03) plt.grid(True) plt.show() fig.savefig('numb_persons.png', dpi=300) # Show extra columns in customers customers[['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP']].head() # Find non-numeric columns obj_cols=customers.select_dtypes(include= ['object']).columns print(obj_cols) # print columns with greater 50% of missing values customers.loc[:,customers.isnull().sum(axis=0)>0.5*customers.shape[0]].columns # count values in 'D19_LETZTER_KAUF_BRANCHE' customers['D19_LETZTER_KAUF_BRANCHE'].value_counts() ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Data PreparationFor data preparation various **cleaning functions** are created. The cleaning steps are then performed with the function 'clean_df'. See the docstrings for a description of the function. ###Code # create cleaning functions def custom_cleaning(df): ''' This function performs custom cleaning steps. 
INPUT df - input data (DataFrame) OUTPUT df - cleaned output data (DataFrame) ''' # drop column with ID's df.drop(['LNR'], axis = 1, inplace=True) # replace values that represent 'unknown' with NaN and change dtypes for i, val in enumerate(df.columns): df[val].replace(to_replace=-1, value=float('nan'), inplace=True) df[val].replace(to_replace='-1', value=float('nan'), inplace=True) df[val].replace(to_replace='-1.0', value=float('nan'), inplace=True) if val[:2]=='LP': df[val]=df[val].astype('object', copy=False) if val[:4]=='SEMIO': df[val].replace(to_replace=9, value=float('nan'), inplace=True) # replace values that represent 'unknown' with NaN df['CAMEO_DEUG_2015'].replace(to_replace='X', value=float('nan'), inplace=True) df['CAMEO_INTL_2015'].replace(to_replace='XX', value=float('nan'), inplace=True) df['CAMEO_DEU_2015'].replace(to_replace='XX', value=float('nan'), inplace=True) df['ALTERSKATEGORIE_GROB'].replace(to_replace=0, value=float('nan'), inplace=True) df['ALTERSKATEGORIE_GROB'].replace(to_replace=9, value=float('nan'), inplace=True) df['ANREDE_KZ'].replace(to_replace=0, value=float('nan'), inplace=True) df['NATIONALITAET_KZ'].replace(to_replace=0, value=float('nan'), inplace=True) df['RETOURTYP_BK_S'].replace(to_replace=0, value=float('nan'), inplace=True) df['TITEL_KZ'].replace(to_replace=0, value=float('nan'), inplace=True) df['ZABEOTYP'].replace(to_replace=9, value=float('nan'), inplace=True) df['CJT_GESAMTTYP'].replace(to_replace=0, value=float('nan'), inplace=True) df['GEBAEUDETYP'].replace(to_replace=0, value=float('nan'), inplace=True) df['HH_EINKOMMEN_SCORE'].replace(to_replace=0, value=float('nan'), inplace=True) df['KKK'].replace(to_replace=0, value=float('nan'), inplace=True) df['REGIOTYP'].replace(to_replace=0, value=float('nan'), inplace=True) df['RELAT_AB'].replace(to_replace=9, value=float('nan'), inplace=True) df['WOHNDAUER_2008'].replace(to_replace=0, value=float('nan'), inplace=True) df['W_KEIT_KIND_HH'].replace(to_replace=0, value=float('nan'), inplace=True) df['D19_KONSUMTYP'].replace(to_replace=9, value=float('nan'), inplace=True) # change dtypes df['CAMEO_DEUG_2015']=df['CAMEO_DEUG_2015'].astype('float64', copy=False) df['CAMEO_INTL_2015']=df['CAMEO_INTL_2015'].astype('float64', copy=False) list_dtype = ['CAMEO_DEUG_2015', 'CAMEO_INTL_2015', 'FINANZTYP', 'SHOPPER_TYP', 'GFK_URLAUBERTYP', 'HEALTH_TYP', 'PRAEGENDE_JUGENDJAHRE', 'TITEL_KZ', 'ZABEOTYP', 'D19_KONSUMTYP', 'WOHNLAGE'] for _, val in enumerate(list_dtype): df[val]=df[val].astype('object', copy=False) return df # remove and store extra columns for customers def remove_extra_columns(df): ''' This function removes and stores extra columns in custormer df INPUT df - customers data (DataFrame) OUTPUT df_no_extra - customers data without extra columns (DataFrame) df_extra - extra columns from customers data (DataFrame) ''' df_no_extra = df.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], axis=1) df_extra = df[['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP']] return df_no_extra, df_extra # remove rows with greater than frac of nan values def remove_rows_nan(df, frac): ''' This function removes rows with more than a fraction of nan values INPUT df - input data (DataFrame) frac - fraction of allowed nan values in row (float) OUTPUT df_clean - data with removed rows (DataFrame) idx_rows - indices of the removed rows (Index object) ''' idx_rows = df.loc[df.isnull().sum(axis=1)>frac*df.shape[1]].index df_clean=df.drop(idx_rows, axis=0) return df_clean, idx_rows # remove columns with greater 
than frac of nan values def remove_cols_nan(df, frac): ''' This function removes columns with more than a fraction of nan values INPUT df - input data (DataFrame) frac - fraction of allowed nan values in column (float) OUTPUT df_clean - data with removed columns (DataFrame) idx_cols - labels of the removed columns (Index object) ''' idx_cols = df.loc[:,df.isnull().sum(axis=0)>frac*df.shape[0]].columns df_clean=df.drop(idx_cols, axis=1) return df_clean, idx_cols # remove duplicate data def remove_duplicated(df): ''' This function removes duplicated rows INPUT df - data (DataFrame) OUTOUT df_clean - data with removed rows (DataFrame) idx_duplicated - row indices of removed rows (Index object) ''' idx_duplicated = df.loc[df.duplicated()].index df_clean = df.drop(idx_duplicated, axis=0) return df_clean, idx_duplicated # parse str into int (e.g., 'CAMEO_DEUG_2015') def find_digits(s): """ This function extracts float and int numbers in strings. INPUT s - A string OUTPUT p or d- Floatingpoint number (float) s - The input string if no digits where detected (str) """ p=re.match('\d+\.\d+$', str(s)) d=re.match('\d+$', str(s)) if p: return float(p[0]) elif d: return float(d[0]) elif s=='NaN': return float('nan') else: return s # Extract year from date def parse_date(df): ''' This function outputs the year form a date in column 'EINGEFUEGT_AM' INPUT df - data with date (DataFrame) OUTPUT df - data with year (DataFrame) ''' df['EINGEFUEGT_AM']=df['EINGEFUEGT_AM'].apply(lambda x: pd.to_datetime(x).year if (x is not'NaN') else float('nan')) return df # create dummy variables for obj columns def create_dummy(df): ''' This function creates dummy variables for categorical attributes INPUT df - data (DataFrame) OUTPUT df - data with dummy variables (DataFrame) ''' # get list of dtypes object columns cat_cols_lst = df.select_dtypes(include= ['object']).columns # create DataFrame with dtype object columns df_cat = df[cat_cols_lst] # create dummy variables df_dummy=pd.get_dummies(df_cat, dummy_na = False) # drop dtype object columns from original DataFrame df.drop(cat_cols_lst, axis=1, inplace=True) # join dummy variables with original DataFrame df = pd.concat([df, df_dummy], axis=1, join='inner') return df # Drop columns with no variability def drop_cols_no_variability(df): ''' This function removes columns with no variability INPUT df - input data (DataFrame) OUTPUT df - data with removed columns (DataFrame) ''' #Drop columns with no variablility for col in df.columns: if df[col].value_counts().shape[0]==1: df.drop(columns=[col], axis=1, inplace=True) return df def clean_df(df, customer_in = True, frac = 0.9): ''' Function that applies cleaning steps: 1. Custom cleaning 2. Remove and store extra columns for customers data 3. Remove duplicate data 4. Remove columns with 'frac' nan values 5. Remove columns with no variability 6. Parse strings with digits into floats 7. Extract year from date 8. 
Create dummy variables for obj columns INPUT df - data (DataFrame) customer_in - set 'True' for customer data, 'False' for azdias data (boolean) frac - fraction of nan values allowed in a column (float) OUTPUT df - cleaned data (DataFrame) df_extra - extra data in customer data (DataFrame) ''' # Perform custom cleaning df = custom_cleaning(df) # Remove and store extra columns for customers data if customer_in==True: df, df_extra = remove_extra_columns(df) elif customer_in==False: df_extra=pd.DataFrame() else: raise ValueError('customer_in can be True or False only') # Remove duplicate data df, idx_rows = remove_duplicated(df) # Remove columns with many of nan values df, idx_cols = remove_cols_nan(df, frac = frac) # Remove columns with no variability df = drop_cols_no_variability(df) # Parse strings with digits into floats # find type obj columns cols_obj = df.select_dtypes(include= ['object']).columns # parse columns for col in cols_obj: df[col].apply(find_digits) # Extract year from date df = parse_date(df) # Create dummy variables for obj columns df = create_dummy(df) # remove rows from df_extra if customer_in==True: df_extra=df_extra.drop(idx_rows, axis = 0) return df, df_extra ###Output _____no_output_____ ###Markdown Execute the cleaning steps using 'clean_df'. ###Code # clean data df_customers, df_extra_customers = clean_df(customers, frac = 0.9) df_azdias, df_extra_azdias = clean_df(azdias, customer_in = False, frac = 0.9) # Obtain columns that are in azdias and in customers # Print the shapes of the cleaned arrays idx_cols_both=np.intersect1d(df_azdias.columns, df_customers.columns) print('Shape of general population data before intersection:', df_azdias.shape) print('Shape of customers data before intersection:', df_customers.shape) df_azdias=df_azdias[idx_cols_both] df_customers=df_customers[idx_cols_both] print('Shape of general population data after intersection:', df_azdias.shape) print('Shape of customers data after intersection:', df_customers.shape) ###Output Shape of general population data before intersection: (2855, 600) Shape of customers data before intersection: (2351, 600) Shape of general population data after intersection: (2855, 600) Shape of customers data after intersection: (2351, 600) ###Markdown Next, we check whether the columns in the cleaned DataFrames were created as expected. 
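Before listing them, a small toy sketch (made-up category values, not taken from the Arvato data) shows why the column intersection above is needed: pd.get_dummies only creates dummy columns for the categories that actually occur in a frame, so the two cleaned datasets can end up with different columns and have to be aligned on their shared ones. ###Code
# Toy illustration with hypothetical values: dummy columns differ when the category levels differ
import numpy as np
import pandas as pd

toy_a = pd.get_dummies(pd.DataFrame({'COLOR': ['red', 'blue', 'blue']}))
toy_b = pd.get_dummies(pd.DataFrame({'COLOR': ['red', 'green']}))
print(toy_a.columns.tolist())  # ['COLOR_blue', 'COLOR_red']
print(toy_b.columns.tolist())  # ['COLOR_green', 'COLOR_red']

# Keeping only the shared columns aligns both frames, as done above with np.intersect1d
shared_cols = np.intersect1d(toy_a.columns, toy_b.columns)
print(shared_cols)  # ['COLOR_red']
###Output _____no_output_____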
###Code # show all the column labels in the cleaned DataFrame for i, val in enumerate(df_azdias.columns): print(val) ###Output AGER_TYP AKT_DAT_KL ALTERSKATEGORIE_FEIN ALTERSKATEGORIE_GROB ALTER_HH ANREDE_KZ ANZ_HAUSHALTE_AKTIV ANZ_HH_TITEL ANZ_KINDER ANZ_PERSONEN ANZ_STATISTISCHE_HAUSHALTE ANZ_TITEL ARBEIT BALLRAUM CAMEO_DEUG_2015_1.0 CAMEO_DEUG_2015_2.0 CAMEO_DEUG_2015_3.0 CAMEO_DEUG_2015_4.0 CAMEO_DEUG_2015_5.0 CAMEO_DEUG_2015_6.0 CAMEO_DEUG_2015_7.0 CAMEO_DEUG_2015_8.0 CAMEO_DEUG_2015_9.0 CAMEO_DEU_2015_1A CAMEO_DEU_2015_1B CAMEO_DEU_2015_1C CAMEO_DEU_2015_1D CAMEO_DEU_2015_1E CAMEO_DEU_2015_2A CAMEO_DEU_2015_2B CAMEO_DEU_2015_2C CAMEO_DEU_2015_2D CAMEO_DEU_2015_3A CAMEO_DEU_2015_3B CAMEO_DEU_2015_3C CAMEO_DEU_2015_3D CAMEO_DEU_2015_4A CAMEO_DEU_2015_4B CAMEO_DEU_2015_4C CAMEO_DEU_2015_4D CAMEO_DEU_2015_4E CAMEO_DEU_2015_5A CAMEO_DEU_2015_5B CAMEO_DEU_2015_5C CAMEO_DEU_2015_5D CAMEO_DEU_2015_5E CAMEO_DEU_2015_5F CAMEO_DEU_2015_6A CAMEO_DEU_2015_6B CAMEO_DEU_2015_6C CAMEO_DEU_2015_6D CAMEO_DEU_2015_6E CAMEO_DEU_2015_6F CAMEO_DEU_2015_7A CAMEO_DEU_2015_7B CAMEO_DEU_2015_7C CAMEO_DEU_2015_7D CAMEO_DEU_2015_7E CAMEO_DEU_2015_8A CAMEO_DEU_2015_8B CAMEO_DEU_2015_8C CAMEO_DEU_2015_8D CAMEO_DEU_2015_9A CAMEO_DEU_2015_9B CAMEO_DEU_2015_9C CAMEO_DEU_2015_9D CAMEO_DEU_2015_9E CAMEO_INTL_2015_12.0 CAMEO_INTL_2015_13.0 CAMEO_INTL_2015_14.0 CAMEO_INTL_2015_15.0 CAMEO_INTL_2015_22.0 CAMEO_INTL_2015_23.0 CAMEO_INTL_2015_24.0 CAMEO_INTL_2015_25.0 CAMEO_INTL_2015_31.0 CAMEO_INTL_2015_32.0 CAMEO_INTL_2015_33.0 CAMEO_INTL_2015_34.0 CAMEO_INTL_2015_35.0 CAMEO_INTL_2015_41.0 CAMEO_INTL_2015_43.0 CAMEO_INTL_2015_44.0 CAMEO_INTL_2015_45.0 CAMEO_INTL_2015_51.0 CAMEO_INTL_2015_52.0 CAMEO_INTL_2015_54.0 CAMEO_INTL_2015_55.0 CJT_GESAMTTYP CJT_KATALOGNUTZER CJT_TYP_1 CJT_TYP_2 CJT_TYP_3 CJT_TYP_4 CJT_TYP_5 CJT_TYP_6 D19_BANKEN_ANZ_12 D19_BANKEN_ANZ_24 D19_BANKEN_DATUM D19_BANKEN_DIREKT D19_BANKEN_GROSS D19_BANKEN_LOKAL D19_BANKEN_OFFLINE_DATUM D19_BANKEN_ONLINE_DATUM D19_BANKEN_ONLINE_QUOTE_12 D19_BANKEN_REST D19_BEKLEIDUNG_GEH D19_BEKLEIDUNG_REST D19_BILDUNG D19_BIO_OEKO D19_BUCH_CD D19_DIGIT_SERV D19_DROGERIEARTIKEL D19_ENERGIE D19_FREIZEIT D19_GARTEN D19_GESAMT_ANZ_12 D19_GESAMT_ANZ_24 D19_GESAMT_DATUM D19_GESAMT_OFFLINE_DATUM D19_GESAMT_ONLINE_DATUM D19_GESAMT_ONLINE_QUOTE_12 D19_HANDWERK D19_HAUS_DEKO D19_KINDERARTIKEL D19_KONSUMTYP_1.0 D19_KONSUMTYP_2.0 D19_KONSUMTYP_3.0 D19_KONSUMTYP_4.0 D19_KONSUMTYP_5.0 D19_KONSUMTYP_6.0 D19_KONSUMTYP_MAX D19_KOSMETIK D19_LEBENSMITTEL D19_LETZTER_KAUF_BRANCHE_D19_BANKEN_DIREKT D19_LETZTER_KAUF_BRANCHE_D19_BANKEN_GROSS D19_LETZTER_KAUF_BRANCHE_D19_BANKEN_LOKAL D19_LETZTER_KAUF_BRANCHE_D19_BANKEN_REST D19_LETZTER_KAUF_BRANCHE_D19_BEKLEIDUNG_GEH D19_LETZTER_KAUF_BRANCHE_D19_BEKLEIDUNG_REST D19_LETZTER_KAUF_BRANCHE_D19_BILDUNG D19_LETZTER_KAUF_BRANCHE_D19_BIO_OEKO D19_LETZTER_KAUF_BRANCHE_D19_BUCH_CD D19_LETZTER_KAUF_BRANCHE_D19_DIGIT_SERV D19_LETZTER_KAUF_BRANCHE_D19_DROGERIEARTIKEL D19_LETZTER_KAUF_BRANCHE_D19_ENERGIE D19_LETZTER_KAUF_BRANCHE_D19_FREIZEIT D19_LETZTER_KAUF_BRANCHE_D19_GARTEN D19_LETZTER_KAUF_BRANCHE_D19_HANDWERK D19_LETZTER_KAUF_BRANCHE_D19_HAUS_DEKO D19_LETZTER_KAUF_BRANCHE_D19_KINDERARTIKEL D19_LETZTER_KAUF_BRANCHE_D19_KOSMETIK D19_LETZTER_KAUF_BRANCHE_D19_LEBENSMITTEL D19_LETZTER_KAUF_BRANCHE_D19_LOTTO D19_LETZTER_KAUF_BRANCHE_D19_NAHRUNGSERGAENZUNG D19_LETZTER_KAUF_BRANCHE_D19_RATGEBER D19_LETZTER_KAUF_BRANCHE_D19_REISEN D19_LETZTER_KAUF_BRANCHE_D19_SAMMELARTIKEL D19_LETZTER_KAUF_BRANCHE_D19_SCHUHE D19_LETZTER_KAUF_BRANCHE_D19_SONSTIGE 
D19_LETZTER_KAUF_BRANCHE_D19_TECHNIK D19_LETZTER_KAUF_BRANCHE_D19_TELKO_MOBILE D19_LETZTER_KAUF_BRANCHE_D19_TELKO_REST D19_LETZTER_KAUF_BRANCHE_D19_TIERARTIKEL D19_LETZTER_KAUF_BRANCHE_D19_UNBEKANNT D19_LETZTER_KAUF_BRANCHE_D19_VERSAND_REST D19_LETZTER_KAUF_BRANCHE_D19_VERSICHERUNGEN D19_LETZTER_KAUF_BRANCHE_D19_VOLLSORTIMENT D19_LETZTER_KAUF_BRANCHE_D19_WEIN_FEINKOST D19_LOTTO D19_NAHRUNGSERGAENZUNG D19_RATGEBER D19_REISEN D19_SAMMELARTIKEL D19_SCHUHE D19_SONSTIGE D19_SOZIALES D19_TECHNIK D19_TELKO_ANZ_12 D19_TELKO_ANZ_24 D19_TELKO_DATUM D19_TELKO_MOBILE D19_TELKO_OFFLINE_DATUM D19_TELKO_ONLINE_DATUM D19_TELKO_ONLINE_QUOTE_12 D19_TELKO_REST D19_TIERARTIKEL D19_VERSAND_ANZ_12 D19_VERSAND_ANZ_24 D19_VERSAND_DATUM D19_VERSAND_OFFLINE_DATUM D19_VERSAND_ONLINE_DATUM D19_VERSAND_ONLINE_QUOTE_12 D19_VERSAND_REST D19_VERSICHERUNGEN D19_VERSI_ANZ_12 D19_VERSI_ANZ_24 D19_VERSI_DATUM D19_VERSI_OFFLINE_DATUM D19_VERSI_ONLINE_DATUM D19_VERSI_ONLINE_QUOTE_12 D19_VOLLSORTIMENT D19_WEIN_FEINKOST DSL_FLAG EINGEFUEGT_AM EINGEZOGENAM_HH_JAHR EWDICHTE EXTSEL992 FINANZTYP_1 FINANZTYP_2 FINANZTYP_3 FINANZTYP_4 FINANZTYP_5 FINANZTYP_6 FINANZ_ANLEGER FINANZ_HAUSBAUER FINANZ_MINIMALIST FINANZ_SPARER FINANZ_UNAUFFAELLIGER FINANZ_VORSORGER FIRMENDICHTE GEBAEUDETYP GEBAEUDETYP_RASTER GEBURTSJAHR GEMEINDETYP GFK_URLAUBERTYP_1.0 GFK_URLAUBERTYP_10.0 GFK_URLAUBERTYP_11.0 GFK_URLAUBERTYP_12.0 GFK_URLAUBERTYP_2.0 GFK_URLAUBERTYP_3.0 GFK_URLAUBERTYP_4.0 GFK_URLAUBERTYP_5.0 GFK_URLAUBERTYP_6.0 GFK_URLAUBERTYP_7.0 GFK_URLAUBERTYP_8.0 GFK_URLAUBERTYP_9.0 GREEN_AVANTGARDE HEALTH_TYP_1.0 HEALTH_TYP_2.0 HEALTH_TYP_3.0 HH_DELTA_FLAG HH_EINKOMMEN_SCORE INNENSTADT KBA05_ALTER1 KBA05_ALTER2 KBA05_ALTER3 KBA05_ALTER4 KBA05_ANHANG KBA05_ANTG1 KBA05_ANTG2 KBA05_ANTG3 KBA05_ANTG4 KBA05_AUTOQUOT KBA05_BAUMAX KBA05_CCM1 KBA05_CCM2 KBA05_CCM3 KBA05_CCM4 KBA05_DIESEL KBA05_FRAU KBA05_GBZ KBA05_HERST1 KBA05_HERST2 KBA05_HERST3 KBA05_HERST4 KBA05_HERST5 KBA05_HERSTTEMP KBA05_KRSAQUOT KBA05_KRSHERST1 KBA05_KRSHERST2 KBA05_KRSHERST3 KBA05_KRSKLEIN KBA05_KRSOBER KBA05_KRSVAN KBA05_KRSZUL KBA05_KW1 KBA05_KW2 KBA05_KW3 KBA05_MAXAH KBA05_MAXBJ KBA05_MAXHERST KBA05_MAXSEG KBA05_MAXVORB KBA05_MOD1 KBA05_MOD2 KBA05_MOD3 KBA05_MOD4 KBA05_MOD8 KBA05_MODTEMP KBA05_MOTOR KBA05_MOTRAD KBA05_SEG1 KBA05_SEG10 KBA05_SEG2 KBA05_SEG3 KBA05_SEG4 KBA05_SEG5 KBA05_SEG6 KBA05_SEG7 KBA05_SEG8 KBA05_SEG9 KBA05_VORB0 KBA05_VORB1 KBA05_VORB2 KBA05_ZUL1 KBA05_ZUL2 KBA05_ZUL3 KBA05_ZUL4 KBA13_ALTERHALTER_30 KBA13_ALTERHALTER_45 KBA13_ALTERHALTER_60 KBA13_ALTERHALTER_61 KBA13_ANTG1 KBA13_ANTG2 KBA13_ANTG3 KBA13_ANTG4 KBA13_ANZAHL_PKW KBA13_AUDI KBA13_AUTOQUOTE KBA13_BAUMAX KBA13_BJ_1999 KBA13_BJ_2000 KBA13_BJ_2004 KBA13_BJ_2006 KBA13_BJ_2008 KBA13_BJ_2009 KBA13_BMW KBA13_CCM_0_1400 KBA13_CCM_1000 KBA13_CCM_1200 KBA13_CCM_1400 KBA13_CCM_1401_2500 KBA13_CCM_1500 KBA13_CCM_1600 KBA13_CCM_1800 KBA13_CCM_2000 KBA13_CCM_2500 KBA13_CCM_2501 KBA13_CCM_3000 KBA13_CCM_3001 KBA13_FAB_ASIEN KBA13_FAB_SONSTIGE KBA13_FIAT KBA13_FORD KBA13_GBZ KBA13_HALTER_20 KBA13_HALTER_25 KBA13_HALTER_30 KBA13_HALTER_35 KBA13_HALTER_40 KBA13_HALTER_45 KBA13_HALTER_50 KBA13_HALTER_55 KBA13_HALTER_60 KBA13_HALTER_65 KBA13_HALTER_66 KBA13_HERST_ASIEN KBA13_HERST_AUDI_VW KBA13_HERST_BMW_BENZ KBA13_HERST_EUROPA KBA13_HERST_FORD_OPEL KBA13_HERST_SONST KBA13_HHZ KBA13_KMH_0_140 KBA13_KMH_110 KBA13_KMH_140 KBA13_KMH_140_210 KBA13_KMH_180 KBA13_KMH_210 KBA13_KMH_211 KBA13_KMH_250 KBA13_KMH_251 KBA13_KRSAQUOT KBA13_KRSHERST_AUDI_VW KBA13_KRSHERST_BMW_BENZ KBA13_KRSHERST_FORD_OPEL KBA13_KRSSEG_KLEIN 
KBA13_KRSSEG_OBER KBA13_KRSSEG_VAN KBA13_KRSZUL_NEU KBA13_KW_0_60 KBA13_KW_110 KBA13_KW_120 KBA13_KW_121 KBA13_KW_30 KBA13_KW_40 KBA13_KW_50 KBA13_KW_60 KBA13_KW_61_120 KBA13_KW_70 KBA13_KW_80 KBA13_KW_90 KBA13_MAZDA KBA13_MERCEDES KBA13_MOTOR KBA13_NISSAN KBA13_OPEL KBA13_PEUGEOT KBA13_RENAULT KBA13_SEG_GELAENDEWAGEN KBA13_SEG_GROSSRAUMVANS KBA13_SEG_KLEINST KBA13_SEG_KLEINWAGEN KBA13_SEG_KOMPAKTKLASSE KBA13_SEG_MINIVANS KBA13_SEG_MINIWAGEN KBA13_SEG_MITTELKLASSE KBA13_SEG_OBEREMITTELKLASSE KBA13_SEG_OBERKLASSE KBA13_SEG_SONSTIGE KBA13_SEG_SPORTWAGEN KBA13_SEG_UTILITIES KBA13_SEG_VAN KBA13_SEG_WOHNMOBILE KBA13_SITZE_4 KBA13_SITZE_5 KBA13_SITZE_6 KBA13_TOYOTA KBA13_VORB_0 KBA13_VORB_1 KBA13_VORB_1_2 KBA13_VORB_2 KBA13_VORB_3 KBA13_VW KKK KK_KUNDENTYP KOMBIALTER KONSUMNAEHE KONSUMZELLE LP_FAMILIE_FEIN_0.0 LP_FAMILIE_FEIN_1.0 LP_FAMILIE_FEIN_10.0 LP_FAMILIE_FEIN_11.0 LP_FAMILIE_FEIN_2.0 LP_FAMILIE_FEIN_3.0 LP_FAMILIE_FEIN_4.0 LP_FAMILIE_FEIN_5.0 LP_FAMILIE_FEIN_6.0 LP_FAMILIE_FEIN_7.0 LP_FAMILIE_FEIN_8.0 LP_FAMILIE_FEIN_9.0 LP_FAMILIE_GROB_0.0 LP_FAMILIE_GROB_1.0 LP_FAMILIE_GROB_2.0 LP_FAMILIE_GROB_3.0 LP_FAMILIE_GROB_4.0 LP_FAMILIE_GROB_5.0 LP_LEBENSPHASE_FEIN_0.0 LP_LEBENSPHASE_FEIN_1.0 LP_LEBENSPHASE_FEIN_10.0 LP_LEBENSPHASE_FEIN_11.0 LP_LEBENSPHASE_FEIN_12.0 LP_LEBENSPHASE_FEIN_13.0 LP_LEBENSPHASE_FEIN_14.0 LP_LEBENSPHASE_FEIN_15.0 LP_LEBENSPHASE_FEIN_16.0 LP_LEBENSPHASE_FEIN_17.0 LP_LEBENSPHASE_FEIN_18.0 LP_LEBENSPHASE_FEIN_19.0 LP_LEBENSPHASE_FEIN_2.0 LP_LEBENSPHASE_FEIN_20.0 LP_LEBENSPHASE_FEIN_21.0 LP_LEBENSPHASE_FEIN_22.0 LP_LEBENSPHASE_FEIN_23.0 LP_LEBENSPHASE_FEIN_24.0 LP_LEBENSPHASE_FEIN_25.0 LP_LEBENSPHASE_FEIN_26.0 LP_LEBENSPHASE_FEIN_27.0 LP_LEBENSPHASE_FEIN_28.0 LP_LEBENSPHASE_FEIN_29.0 LP_LEBENSPHASE_FEIN_3.0 LP_LEBENSPHASE_FEIN_30.0 LP_LEBENSPHASE_FEIN_31.0 LP_LEBENSPHASE_FEIN_32.0 LP_LEBENSPHASE_FEIN_33.0 LP_LEBENSPHASE_FEIN_34.0 LP_LEBENSPHASE_FEIN_35.0 LP_LEBENSPHASE_FEIN_36.0 LP_LEBENSPHASE_FEIN_37.0 LP_LEBENSPHASE_FEIN_38.0 LP_LEBENSPHASE_FEIN_39.0 LP_LEBENSPHASE_FEIN_4.0 LP_LEBENSPHASE_FEIN_40.0 LP_LEBENSPHASE_FEIN_5.0 LP_LEBENSPHASE_FEIN_6.0 LP_LEBENSPHASE_FEIN_7.0 LP_LEBENSPHASE_FEIN_8.0 LP_LEBENSPHASE_FEIN_9.0 LP_LEBENSPHASE_GROB_0.0 LP_LEBENSPHASE_GROB_1.0 LP_LEBENSPHASE_GROB_10.0 LP_LEBENSPHASE_GROB_11.0 LP_LEBENSPHASE_GROB_12.0 LP_LEBENSPHASE_GROB_2.0 LP_LEBENSPHASE_GROB_3.0 LP_LEBENSPHASE_GROB_4.0 LP_LEBENSPHASE_GROB_5.0 LP_LEBENSPHASE_GROB_6.0 LP_LEBENSPHASE_GROB_7.0 LP_LEBENSPHASE_GROB_8.0 LP_LEBENSPHASE_GROB_9.0 LP_STATUS_FEIN_1.0 LP_STATUS_FEIN_10.0 LP_STATUS_FEIN_2.0 LP_STATUS_FEIN_3.0 LP_STATUS_FEIN_4.0 LP_STATUS_FEIN_5.0 LP_STATUS_FEIN_6.0 LP_STATUS_FEIN_7.0 LP_STATUS_FEIN_8.0 LP_STATUS_FEIN_9.0 LP_STATUS_GROB_1.0 LP_STATUS_GROB_2.0 LP_STATUS_GROB_3.0 LP_STATUS_GROB_4.0 LP_STATUS_GROB_5.0 MIN_GEBAEUDEJAHR MOBI_RASTER MOBI_REGIO NATIONALITAET_KZ ONLINE_AFFINITAET ORTSGR_KLS9 OST_WEST_KZ_O OST_WEST_KZ_W PLZ8_ANTG1 PLZ8_ANTG2 PLZ8_ANTG3 PLZ8_ANTG4 PLZ8_BAUMAX PLZ8_GBZ PLZ8_HHZ PRAEGENDE_JUGENDJAHRE_0 PRAEGENDE_JUGENDJAHRE_1 PRAEGENDE_JUGENDJAHRE_10 PRAEGENDE_JUGENDJAHRE_11 PRAEGENDE_JUGENDJAHRE_12 PRAEGENDE_JUGENDJAHRE_13 PRAEGENDE_JUGENDJAHRE_14 PRAEGENDE_JUGENDJAHRE_15 PRAEGENDE_JUGENDJAHRE_2 PRAEGENDE_JUGENDJAHRE_3 PRAEGENDE_JUGENDJAHRE_4 PRAEGENDE_JUGENDJAHRE_5 PRAEGENDE_JUGENDJAHRE_6 PRAEGENDE_JUGENDJAHRE_7 PRAEGENDE_JUGENDJAHRE_8 PRAEGENDE_JUGENDJAHRE_9 REGIOTYP RELAT_AB RETOURTYP_BK_S RT_KEIN_ANREIZ RT_SCHNAEPPCHEN RT_UEBERGROESSE SEMIO_DOM SEMIO_ERL SEMIO_FAM SEMIO_KAEM SEMIO_KRIT SEMIO_KULT SEMIO_LUST SEMIO_MAT SEMIO_PFLICHT SEMIO_RAT SEMIO_REL 
SEMIO_SOZ SEMIO_TRADV SEMIO_VERT SHOPPER_TYP_0.0 SHOPPER_TYP_1.0 SHOPPER_TYP_2.0 SHOPPER_TYP_3.0 SOHO_KZ STRUKTURTYP UMFELD_ALT UMFELD_JUNG UNGLEICHENN_FLAG VERDICHTUNGSRAUM VERS_TYP VHA VHN VK_DHT4A VK_DISTANZ VK_ZG11 WOHNDAUER_2008 WOHNLAGE_0.0 WOHNLAGE_1.0 WOHNLAGE_2.0 WOHNLAGE_3.0 WOHNLAGE_4.0 WOHNLAGE_5.0 WOHNLAGE_7.0 WOHNLAGE_8.0 W_KEIT_KIND_HH ZABEOTYP_1 ZABEOTYP_2 ZABEOTYP_3 ZABEOTYP_4 ZABEOTYP_5 ZABEOTYP_6 ###Markdown **Checkpoint**: Store the cleaned DataFrames and column labels to disk so they can be loaded for unsupervised learning. ###Code # Store the cleaned data as csv files df_customers.to_csv('df_customers.csv', sep=';') df_azdias.to_csv('df_azdias.csv', sep=';') # Save extra data to pickle joblib.dump(df_extra_customers, 'df_extra_customers.pkl', compress = 1) # delete not used data del azdias, customers, df_extra_azdias, obj_cols, idx_cols_both, df_customers, \ df_azdias, df_extra_customers ###Output _____no_output_____ ###Markdown Unsupervised Learning To find differences between the customers and the general population, we will perform a **Mann-Whitney U test** for each attribute of the data. The Mann-Whitney U test is a non-parametric test for statistical differences in the distribution of two datasets. We will use it to obtain a p-value that indicates the likelihood of a statistical difference for each attribute. First we load the cleaned DataFrames. ###Code # load cleaned DataFrames df_extra_customers = joblib.load('df_extra_customers.pkl') customers=pd.read_csv('df_customers.csv', sep=';', index_col=0) azdias=pd.read_csv('df_azdias.csv', sep=';', index_col=0) customers.head() ###Output _____no_output_____ ###Markdown Next we perform the Mann-Whitney U test. Note that this is performed on the reduced datasets, since otherwise many p-values become 0 and can no longer be ranked. ###Code # Iterate through columns to find the p-value for each feature using a Mann-Whitney U test p_values = [] features = [] for col in customers.columns: x, y = customers[col].dropna(), azdias[col].dropna() p_val = stats.mannwhitneyu(x, y, alternative='two-sided') p_values.append(p_val[1]) features.append(col) # cast p_values and features into numpy array p_arr = np.array([p_values, features]).T # create a DataFrame and sort for p_values df_p_values=pd.DataFrame(p_arr, columns=['p_value', 'feature']) df_p_values_cp=df_p_values.astype({'p_value': 'float64', 'feature': 'object'}, copy=True) df_p_values_cp.sort_values(by='p_value', axis=0, inplace=True, ascending=True) df_p_values_cp.head(10) ###Output _____no_output_____ ###Markdown After the features were sorted by the p-values, we plot a top feature to visualize the difference in the distribution between customers and the general population.
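Before the plot, a short self-contained sketch (made-up normal samples, an illustrative assumption rather than actual Arvato features) shows what scipy's mannwhitneyu reports and how to read it: the smaller the p-value, the stronger the evidence that the two samples come from different distributions. ###Code
# Toy sketch of the Mann-Whitney U test on synthetic samples (illustration only)
import numpy as np
import scipy.stats as stats

rng = np.random.RandomState(0)
sample_a = rng.normal(loc=0.0, size=200)  # stand-in for a feature among customers
sample_b = rng.normal(loc=0.5, size=200)  # same feature, shifted, for the general population

stat, p_value = stats.mannwhitneyu(sample_a, sample_b, alternative='two-sided')
print('U statistic: %.1f, p-value: %.2e' % (stat, p_value))
# A very small p-value flags a shift in the distributions; near-identical samples give a large p-value
###Output _____no_output_____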
###Code # plot comparison of distribution for customers and general population x1=customers['HH_EINKOMMEN_SCORE'].dropna().values x2=azdias['HH_EINKOMMEN_SCORE'].dropna().values bins = np.array(range(0,6,1))+0.5 fig=plt.figure() plt.subplot(2,1,1) n, bins, patches = plt.hist(x1, bins = bins, density=True, facecolor='g', alpha=0.75, rwidth=0.6, label='Customers') plt.xlabel('Household income (high to low)', fontsize=13) plt.ylabel('Density', fontsize=13) plt.ylim((0,0.4)) plt.legend(loc='upper left') plt.grid(True) plt.subplot(2,1,2) n, bins, patches = plt.hist(x2, bins = bins, density=True, facecolor='b', alpha=0.75, rwidth=0.6, label = 'General Population') plt.xlabel('Household income (high to low)', fontsize=13) plt.ylabel('Density', fontsize=13) plt.ylim((0,0.4)) plt.legend(loc='upper left') plt.grid(True) plt.show() fig.savefig('hh_income.png', dpi=300) ###Output _____no_output_____ ###Markdown To obtain a value indicating the direction of statistical differences, we print the difference in the mean for the customers and general population groups. ###Code # Print difference in mean for the first twenty features in the p-value list for i, val in enumerate(df_p_values_cp['feature']): if i < 20: m1 = customers[val].mean() m2 = azdias[val].mean() diff = m1-m2 print('For %s the difference in the mean is %f.' % (val, diff)) ###Output For VK_ZG11 the difference in the mean is -2.711149. For CJT_TYP_1 the difference in the mean is -1.270873. For D19_SOZIALES the difference in the mean is 0.579996. For CJT_TYP_2 the difference in the mean is -1.230367. For FINANZ_SPARER the difference in the mean is -1.235662. For FINANZ_VORSORGER the difference in the mean is 1.083675. For FINANZ_MINIMALIST the difference in the mean is 1.172948. For CJT_TYP_5 the difference in the mean is 1.106126. For CJT_TYP_3 the difference in the mean is 1.109050. For CJT_TYP_6 the difference in the mean is 1.034905. For FINANZ_ANLEGER the difference in the mean is -1.201918. For RT_KEIN_ANREIZ the difference in the mean is -1.189784. For VK_DISTANZ the difference in the mean is -2.875813. For AKT_DAT_KL the difference in the mean is -2.809055. For CJT_TYP_4 the difference in the mean is 1.059692. For KOMBIALTER the difference in the mean is 0.736799. For ALTERSKATEGORIE_FEIN the difference in the mean is -3.325616. For D19_KONSUMTYP_MAX the difference in the mean is -2.670803. For HH_EINKOMMEN_SCORE the difference in the mean is -1.222476. For D19_KONSUMTYP_3.0 the difference in the mean is 0.273680. ###Markdown For instance, for the feature 'ALTERSKATEGORIE_FEIN' we find that customers are on average younger than the general population. **Checkpoint**: We have obtained a sorted list of features expressing the statistical difference between the customers and the general population. We will save the list to disk and use it in the supervised learning part below. ###Code # store the p_values to disk df_p_values_cp.to_pickle("./df_p_values_frac_09.pkl") # delete variables del df_p_values_cp, df_p_values, col, features, p_arr, p_val, p_values, \ x, y, customers, df_extra_customers, azdias ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning Model Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign.
Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. First read the 'TRAIN' partition as a DataFrame. ###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') # show the shape of the training DataFrame mailout_train.shape ###Output _____no_output_____ ###Markdown Load the sorted features list created in part 1. ###Code # load dataframes from the pickle files df_p_values = pd.read_pickle("./df_p_values_frac_09.pkl") df_p_values['feature'][:10] ###Output _____no_output_____ ###Markdown First we need to do some cleaning on the training data. Therefore we create the **TrainingDataCleaner class** that contains *fit* and *transform* methods. See the docstring of the *transform* method for the steps involved. ###Code # Create class for cleaning and normalization fit and transform class TrainingDataCleaner(BaseEstimator, TransformerMixin): ''' Parameters ---------- df - data to be cleaned Attributes ---------- features - list of features to keep (str) ''' def __init__(self, features): self.features = features def custom_cleaning(self, df): ''' This function performs custom cleaning steps. INPUT df - input data (DataFrame) OUTPUT df - cleaned output data (DataFrame) ''' # drop column with ID's df.drop(['LNR'], axis = 1, inplace=True) # replace values that represent 'unknown' with NaN and change dtypes for i, val in enumerate(df.columns): df[val].replace(to_replace=-1, value=float('nan'), inplace=True) df[val].replace(to_replace='-1', value=float('nan'), inplace=True) df[val].replace(to_replace='-1.0', value=float('nan'), inplace=True) if val[:2]=='LP': df[val]=df[val].astype('object', copy=False) if val[:4]=='SEMIO': df[val].replace(to_replace=9, value=float('nan'), inplace=True) # replace values that represent 'unknown' with NaN df['CAMEO_DEUG_2015'].replace(to_replace='X', value=float('nan'), inplace=True) df['CAMEO_INTL_2015'].replace(to_replace='XX', value=float('nan'), inplace=True) df['CAMEO_DEU_2015'].replace(to_replace='XX', value=float('nan'), inplace=True) df['ALTERSKATEGORIE_GROB'].replace(to_replace=0, value=float('nan'), inplace=True) df['ALTERSKATEGORIE_GROB'].replace(to_replace=9, value=float('nan'), inplace=True) df['ANREDE_KZ'].replace(to_replace=0, value=float('nan'), inplace=True) df['NATIONALITAET_KZ'].replace(to_replace=0, value=float('nan'), inplace=True) df['RETOURTYP_BK_S'].replace(to_replace=0, value=float('nan'), inplace=True) df['TITEL_KZ'].replace(to_replace=0, value=float('nan'), inplace=True) df['ZABEOTYP'].replace(to_replace=9, value=float('nan'), inplace=True) df['CJT_GESAMTTYP'].replace(to_replace=0, value=float('nan'), inplace=True) df['GEBAEUDETYP'].replace(to_replace=0, value=float('nan'), inplace=True) df['HH_EINKOMMEN_SCORE'].replace(to_replace=0, value=float('nan'), inplace=True) df['KKK'].replace(to_replace=0, value=float('nan'), inplace=True) df['REGIOTYP'].replace(to_replace=0, value=float('nan'), inplace=True) df['RELAT_AB'].replace(to_replace=9, 
value=float('nan'), inplace=True) df['WOHNDAUER_2008'].replace(to_replace=0, value=float('nan'), inplace=True) df['W_KEIT_KIND_HH'].replace(to_replace=0, value=float('nan'), inplace=True) df['D19_KONSUMTYP'].replace(to_replace=9, value=float('nan'), inplace=True) # change dtypes df['CAMEO_DEUG_2015']=df['CAMEO_DEUG_2015'].astype('float64', copy=False) df['CAMEO_INTL_2015']=df['CAMEO_INTL_2015'].astype('float64', copy=False) list_dtype = ['CAMEO_DEUG_2015', 'CAMEO_INTL_2015', 'FINANZTYP', 'SHOPPER_TYP', 'GFK_URLAUBERTYP', 'HEALTH_TYP', 'PRAEGENDE_JUGENDJAHRE', 'TITEL_KZ', 'ZABEOTYP', 'D19_KONSUMTYP', 'WOHNLAGE'] for _, val in enumerate(list_dtype): df[val]=df[val].astype('object', copy=False) return df def find_digits(self, s): """ This function extracts float and int numbers in strings. INPUT s - A string OUTPUT p or d - Floatingpoint number (float) s - The input string if no digits where detected (str) """ p=re.match('\d+\.\d+$', str(s)) d=re.match('\d+$', str(s)) if p: return float(p[0]) elif d: return float(d[0]) elif s=='NaN': return float('nan') else: return s def parse_date(self, df): ''' This function outputs the year form a date in column 'EINGEFUEGT_AM' INPUT df - data with date (DataFrame) OUTPUT df - data with year (DataFrame) ''' df['EINGEFUEGT_AM']=df['EINGEFUEGT_AM'].apply(lambda x: pd.to_datetime(x).year if (x is not'NaN') else float('nan')) return df def create_dummy(self, df): ''' This function creates dummy variables for categorical attributes INPUT df - data (DataFrame) OUTPUT df - data with dummy variables (DataFrame) ''' cat_cols_lst = df.select_dtypes(include= ['object']).columns df_cat = df[cat_cols_lst] df_dummy=pd.get_dummies(df_cat, dummy_na = False) df.drop(cat_cols_lst, axis=1, inplace=True) df = pd.concat([df, df_dummy], axis=1, join='inner') return df def reduce_df_to_features(self, df): ''' This function reduces the data to the features stored in the features variable. INPUT df - data (DataFrame) OUTPUT df - data reduced to features (DataFrame) ''' array_features = np.intersect1d(self.features,df.columns) df = df.loc[:,array_features] return df def create_nan_counts_col(self, df): ''' This function creates a DataFrame column counting the number of missing values in each row. INPUT df - data (DataFrame) OUTPUT df - data reduced to features (DataFrame) ''' df['nan_counts'] = df.isnull().sum(axis=1).tolist() return df def fit(self, X, y=None): return self def transform(self, X): ''' The transform method applies the following data cleaning steps: 1. Perform custom cleaning 3. Parse strings with digits into float 4. Extract year from date 5. Create dummy variables 6. Reduce DataFrame to input features 7. 
Create column that counts Nan values in a row INPUT X - data (DataFrame) OUTPUT df_trans - transformed data (DataFrame) ''' # Make a copy of the input DataFrame df_trans = X.copy(deep=True) # Perform custom cleaning df_trans = self.custom_cleaning(df_trans) # Parse strings with digits into floats # find type obj columns cols_obj = df_trans.select_dtypes(include= ['object']).columns # parse columns for col in cols_obj: df_trans[col].apply(self.find_digits) # Extract year from date df_trans = self.parse_date(df_trans) # Create dummy variables for obj columns df_trans = self.create_dummy(df_trans) #reduce datafream to input features df_trans = self.reduce_df_to_features(df_trans) # create columns that counts nan values in a row df_trans = self.create_nan_counts_col(df_trans) return df_trans ###Output _____no_output_____ ###Markdown Next, a **Scaler class** is created that contains *fit* and *transform* methods. It allows to perform min-max and standard scaling in a pipline. ###Code # Scaler class class Scaler(BaseEstimator, TransformerMixin): ''' Normalizing transform with min-max or standard normalization Attributes: ----------- mode : 'min-max' or 'standard' (str) params : for mode = 'min_max': list of (x_min, x_max) tuples for each column in data (float) for mode = 'standard': list of (mean, std) tuples for each column in data (float) ''' def __init__(self, mode): self.mode = mode def extract_column(self, X): ''' Geberator that yields the columns of a DataFrame object INPUT X - data (DataFrame) OUTPUT i - column index (int) col_array - single column of the data (numpy array) ''' for i, value in enumerate(X.columns): col_array = np.array(X[value]).astype('float32') yield i, col_array def x_min_max(self, data): ''' INPUT data - input data (Series) OUTPUT minimum - maximum value of the data (float) maximum - minimum value of the data (float) ''' minimum = np.nanmin(data, axis = 0) maximum = np.nanmax(data, axis = 0) return minimum, maximum def x_std(self, data): ''' INPUT data - input data (Series) OUTPUT mean - mean of the data (float) std - standard deviation of the data (float) ''' mean = np.nanmean(data, axis = 0) std = np.nanstd(data, axis = 0) return mean, std def fit(self, X, y=None): ''' Fit function INPUT: X - data (DataFrame) ''' self.params = [] if self.mode == 'min_max': for _, X_col in self.extract_column(X): self.params.append(self.x_min_max(X_col)) if self.mode == 'standard': for _, X_col in self.extract_column(X): self.params.append(self.x_std(X_col)) return self def transform(self, X): ''' Transfrom function INPUT X - data (DataFrame) OUTPUT X_trans - normalized data (DataFrame) ''' normalized = np.zeros((X.shape[0],1), dtype='float32') for i, X_col in self.extract_column(X): if self.mode == 'min_max': x_max = self.params[i][1] x_min = self.params[i][0] col_norm = (X_col-x_min)/(x_max-x_min) normalized = np.append(normalized, np.expand_dims(col_norm, axis=1), axis=1) elif self.mode == 'standard': x_std = self.params[i][1] x_mean = self.params[i][0] col_norm = (X_col-x_mean)/(x_std) normalized = np.append(normalized, np.expand_dims(col_norm, axis=1), axis=1) X_trans = pd.DataFrame(data=normalized[:,1:], columns=X.columns) return X_trans ###Output _____no_output_____ ###Markdown Next, a function is created to load the training data and return explanatory and response variables. ###Code def load_data(): ''' This function loads the TRAIN data. 
INPUT None OUTPUT X - explanatory variables (DataFrame) y - response variable (Series) ''' mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';', low_memory=False) y = mailout_train['RESPONSE'] X = mailout_train.drop(['RESPONSE'], axis=1) return X, y ###Output _____no_output_____ ###Markdown Below the models are created and trained. We use a transform pipeline to scale the data and impute missing values, followed by different estimators. The best model is found using a grid search over the model parameters. We also vary the number of top-ranked features (as obtained from the unsupervised learning part) that are included in the data. For scoring we use ROC-AUC, which is well suited to imbalanced datasets (other metrics such as accuracy work poorly on imbalanced datasets). We start with training a **Logistic Regression** classifier. ###Code # build model for Logistic Regression def build_model(): ''' This function builds the ML model using the following steps: 1. Create pipeline object using a list of (key, value) pairs 2. Create dictionary of parameters to be optimized 3. Create GridSearchCV object INPUT None OUTPUT cv - GridSearchCV object ''' pipeline = Pipeline([ ('scaler', Scaler(mode=None)), ('imputer', Imputer()), ('clf', LogisticRegression()) ]) parameters = { 'imputer__strategy': ['mean','median'], 'scaler__mode': ['min_max','standard'], 'clf__C': [0.001, 0.01, 0.1, 1, 10, 100], }, cv = GridSearchCV(pipeline, param_grid = parameters, cv=3, scoring='roc_auc', refit=True) return cv # main function def main(): ''' This function finds the best model using the following steps: 1. Load explanatory and response variables 2. Perform a loop for an increasing number of features 3. Clean the explanatory variables 4. Split the data in train and test datasets 5. Train model and predict on test dataset 6.
Store the best estimator, parameters, number of features and AUC score INPUT None OUTPUT fpr - false positive rates for the test data for the best model (ndarray) tpr - true positive rates for the test data for the best model (ndarray) threshold - thresholds on the decision function used to compute fpr and tpr (ndarray) y_scores - class probabilities of the test samples for the best model (ndarray) best_parameters - parameter setting for the best estimator (dict) best_estimator - estimator which gave highest score (object) results - number of features for best scores (dict) best_auc - mean cross-validated auc-score for the best model (float) auc_test_select - auc-score for the best model on the test data (float) ''' # load explanatory and response variables X, y = load_data() best_auc = 0 # make loop over features for f in range(2,30,2): features = df_p_values['feature'][:f] # clean explanatory variables cleaner_obj_X = TrainingDataCleaner(features) X_1 = cleaner_obj_X.fit_transform(X) # split the data in train and test datasets X_train, X_test, y_train, y_test = train_test_split(X_1, y, test_size=0.33, random_state=42) # train model and predict on test data model = build_model() model.fit(X_train, y_train) auc = model.best_score_ y_probas = model.predict_proba(X_test) y_scores = y_probas[:,1] auc_test = roc_auc_score(y_test, y_scores) # store best estimator, parameters, number of features and auc scores if auc > best_auc: fpr, tpr, threshold = roc_curve(y_test, y_scores) best_parameters = model.best_params_ best_estimator = model.best_estimator_ results={'num_features': len(features)} auc_test_select = auc_test best_auc = auc return fpr, tpr, threshold, y_scores, best_parameters, best_estimator, results, best_auc, auc_test_select fpr, tpr, threshold, y_scores, best_parameters, best_estimator, results, best_auc, auc_test_select = main() # plot roc curve fig, ax = plt.subplots() ax.plot(fpr, tpr) ax.set(xlabel = 'False Positive Rate', ylabel = 'True Positive Rate') ax.yaxis.label.set_size(13) ax.xaxis.label.set_size(13) fig.savefig('roc_curve_LR.png', dpi=300) print('Best AUC:', best_auc) print('AUC for test set:', auc_test_select) print('Number of features:', results['num_features']) print('Best parameters:') print(best_parameters) # store best estimator and number of features joblib.dump(best_estimator, 'best_estimator_logistic.pkl', compress = 1) joblib.dump(results['num_features'], 'num_features_logistic.pkl', compress = 1) ###Output Best AUC: 0.693449511999 AUC for test set: 0.69732494859 Number of features: 20 Best parameters: {'clf__C': 1, 'imputer__strategy': 'median', 'scaler__mode': 'min_max'} ###Markdown Next we train a **Random Forest** classifier. ###Code # build model for RandomForest def build_model(): ''' This function builds the ML model using the following steps: 1. Create pipline object using a list of (key, value) pairs 2. Create dictinary of parameters to be optimized 3. Create GridSearchCV object INPUT None OUTPUT cv - GridSearchCV object ''' pipeline = Pipeline([ ('scaler', Scaler(mode=None)), ('imputer', Imputer()), ('clf', RandomForestClassifier()) ]) parameters = { 'imputer__strategy': ['mean','median'], 'scaler__mode': ['min_max','standard'], 'clf__min_weight_fraction_leaf': [0.005, 0.01, 0.02, 0.03, 0.04, 0.05], 'clf__n_estimators': [100] }, cv = GridSearchCV(pipeline, param_grid = parameters, cv=3, scoring='roc_auc', refit=True) return cv # main function def main(): ''' This function finds the best model using the folowing steps: 1. 
Load explanatory and response variables 2. Perform a loop for an increasing number of features 3. Clean the explanatory variables 3. Split the data in train and test datasets 4. Train model and predict on test dataset 5. Store the best estimator, parameters, number of features and AUC score INPUT None OUTPUT fpr - false positive rates for the test data for the best model (ndarray) tpr - true positive rates for the test data for the best model (ndarray) threshold - thresholds on the decision function used to compute fpr and tpr (ndarray) y_scores - class probabilities of the test samples for the best model (ndarray) best_parameters - parameter setting for the best estimator (dict) best_estimator - estimator which gave highest score (object) results - number of features for best scores (dict) best_auc - mean cross-validated auc-score for the best model (float) auc_test_select - auc-score for the best model on the test data (float) ''' # load explanatory and response variables X, y = load_data() best_auc = 0 # make loop over features for f in range(2,30,2): features = df_p_values['feature'][:f] # clean explanatory variables cleaner_obj_X = TrainingDataCleaner(features) X_1 = cleaner_obj_X.fit_transform(X) # split the data in train and test datasets X_train, X_test, y_train, y_test = train_test_split(X_1, y, test_size=0.33, random_state=42) # train model and predict on test data model = build_model() model.fit(X_train, y_train) auc = model.best_score_ y_probas = model.predict_proba(X_test) y_scores = y_probas[:,1] auc_test = roc_auc_score(y_test, y_scores) # store best estimator, parameters, number of features and auc scores if auc > best_auc: fpr, tpr, threshold = roc_curve(y_test, y_scores) best_parameters = model.best_params_ best_estimator = model.best_estimator_ results={'num_features': len(features)} auc_test_select = auc_test best_auc = auc return fpr, tpr, threshold, y_scores, best_parameters, best_estimator, results, best_auc, auc_test_select fpr, tpr, threshold, y_scores, best_parameters, best_estimator, results, best_auc, auc_test_select = main() # plot roc curve fig, ax = plt.subplots() ax.plot(fpr, tpr) ax.set(xlabel = 'False Positive Rate', ylabel = 'True Positive Rate') ax.yaxis.label.set_size(13) ax.xaxis.label.set_size(13) fig.savefig('roc_curve_RF.png', dpi=300) print('Best AUC:', best_auc) print('AUC for test set:', auc_test_select) print('Number of features:', results['num_features']) print('Best parameters:') print(best_parameters) # store best estimator and number of features joblib.dump(best_estimator, 'best_estimator_forest.pkl', compress = 1) joblib.dump(results['num_features'], 'num_features_forest.pkl', compress = 1) ###Output Best AUC: 0.766185713533 AUC for test set: 0.750616265088 Number of features: 10 Best parameters: {'clf__min_weight_fraction_leaf': 0.01, 'clf__n_estimators': 100, 'imputer__strategy': 'mean', 'scaler__mode': 'standard'} ###Markdown Finally we train a **Gradient Boosting** classifier. ###Code # build model for Gradient Boost def build_model(): ''' This function builds the ML model using the following steps: 1. Create pipline object using a list of (key, value) pairs 2. Create dictinary of parameters to be optimized 3. 
Create GridSearchCV object INPUT None OUTPUT cv - GridSearchCV object ''' pipeline = Pipeline([ ('scaler', Scaler(mode=None)), ('imputer', Imputer()), ('clf', GradientBoostingClassifier()) ]) parameters = { 'imputer__strategy': ['mean','median'], 'scaler__mode': ['min_max','standard'], 'clf__min_weight_fraction_leaf': [0.005, 0.01, 0.02, 0.03, 0.04, 0.05], 'clf__n_estimators': [100] }, cv = GridSearchCV(pipeline, param_grid = parameters, cv=3, scoring='roc_auc', refit=True) return cv # main function def main(): ''' This function finds the best model using the folowing steps: 1. Load explanatory and response variables 2. Perform a loop for an increasing number of features 3. Clean the explanatory variables 3. Split the data in train and test datasets 4. Train model and predict on test dataset 5. Store the best estimator, parameters, number of features and AUC score INPUT None OUTPUT fpr - false positive rates for the test data for the best model (ndarray) tpr - true positive rates for the test data for the best model (ndarray) threshold - thresholds on the decision function used to compute fpr and tpr (ndarray) y_scores - class probabilities of the test samples for the best model (ndarray) best_parameters - parameter setting for the best estimator (dict) best_estimator - estimator which gave highest score (object) results - number of features for best scores (dict) best_auc - mean cross-validated auc-score for the best model (float) auc_test_select - auc-score for the best model on the test data (float) ''' # load explanatory and response variables X, y = load_data() best_auc = 0 # make loop over features for f in range(2,30,2): features = df_p_values['feature'][:f] # clean explanatory variables cleaner_obj_X = TrainingDataCleaner(features) X_1 = cleaner_obj_X.fit_transform(X) # split the data in train and test datasets X_train, X_test, y_train, y_test = train_test_split(X_1, y, test_size=0.33, random_state=42) # train model and predict on test data model = build_model() model.fit(X_train, y_train) auc = model.best_score_ y_probas = model.predict_proba(X_test) y_scores = y_probas[:,1] auc_test = roc_auc_score(y_test, y_scores) # store best estimator, parameters, number of features and auc scores if auc > best_auc: fpr, tpr, threshold = roc_curve(y_test, y_scores) best_parameters = model.best_params_ best_estimator = model.best_estimator_ results={'num_features': len(features)} auc_test_select = auc_test best_auc = auc return fpr, tpr, threshold, y_scores, best_parameters, best_estimator, results, best_auc, auc_test_select fpr, tpr, threshold, y_scores, best_parameters, best_estimator, results, best_auc, auc_test_select = main() # plot roc curve fig, ax = plt.subplots() ax.plot(fpr, tpr) ax.set(xlabel = 'False Positive Rate', ylabel = 'True Positive Rate') ax.yaxis.label.set_size(13) ax.xaxis.label.set_size(13) fig.savefig('roc_curve_GB.png', dpi=300) print('Best AUC:', best_auc) print('AUC for test set:', auc_test_select) print('Number of features:', results['num_features']) print('Best parameters:') print(best_parameters) # store best estimator and number of features joblib.dump(best_estimator, 'best_estimator_boost.pkl', compress = 1) joblib.dump(results['num_features'], 'num_features_boost.pkl', compress = 1) ###Output Best AUC: 0.77557512672 AUC for test set: 0.748891082005 Number of features: 22 Best parameters: {'clf__min_weight_fraction_leaf': 0.04, 'clf__n_estimators': 100, 'imputer__strategy': 'mean', 'scaler__mode': 'min_max'} ###Markdown When comparing the models, we notice that 
the Random Forest classifier performs substantially better than the Logistic Regression classifier, and the Gradient Boosting classifier performs slightly better than the Random Forest classifier. The AUC score (area under the ROC curve) for cross-validation is 0.69 for Logistic Regression, 0.77 for Random Forest, and 0.78 for Gradient Boosting. The scores for the test dataset are similar to the scores obtained from cross-validation. The ROC-AUC metric seems to work well for the imbalanced dataset used here. The number of features is reduced from 600 down to 10-22, showing that a Mann-Whitney U test can be used to reduce the dimensionality of a dataset. Part 3 The best model is then used to predict a response for the '...TEST.csv' data. ###Code def load_test_data(): ''' This function loads the TEST data. INPUT None OUTPUT X - explanatory variables (DataFrame) LNR - individuals' IDs (Series) ''' mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';', low_memory=False) X, LNR = mailout_test, mailout_test['LNR'] return X, LNR # DEPLOY ON TEST DATA best_estimator = joblib.load('best_estimator_boost.pkl') num_features = joblib.load('num_features_boost.pkl') model = build_model() best_pipe = best_estimator # load data X, LNR = load_test_data() features = df_p_values['feature'][:num_features] cleaner_obj = TrainingDataCleaner(features) X_trans = cleaner_obj.fit_transform(X) # make predictions y_probas = best_pipe.predict_proba(X_trans) y_scores = y_probas[:,1] ###Output _____no_output_____ ###Markdown The predictions are saved in a CSV file together with the individuals' IDs. ###Code # Create and save a DataFrame for submission my_submission = pd.DataFrame({'LNR': LNR, 'RESPONSE': y_scores}) my_submission.to_csv('predict_proba.csv', index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial Services In this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task. If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and have not been pre-cleaned. You are also free to choose whatever approach you'd like for analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings.
###Code # importing libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pprint import operator import time import ast from sklearn.preprocessing import Imputer from sklearn.cluster import KMeans from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LogisticRegression from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import BaggingClassifier from sklearn.pipeline import Pipeline # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. 
Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code start = time.time() # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') #customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Be sure to add in a lot more cells (both markdown and code) to document your # approach and findings! # Check the structure of the data after it's loaded (e.g. print the number of # rows and columns, print the first few rows). print(azdias.shape) # print the first 10 rows of the dataset azdias.head(10) azdias['ALTERSKATEGORIE_GROB'].describe() azdias['ALTERSKATEGORIE_GROB'].median() azdias['ALTERSKATEGORIE_GROB'].describe(percentiles=[0.80]) # replacing values for CAMEO_DEUG_2015 azdias['CAMEO_DEUG_2015'] = azdias['CAMEO_DEUG_2015'].replace('X',-1) azdias['CAMEO_DEUG_2015'].describe() azdias['CAMEO_DEUG_2015'].median() azdias['HH_EINKOMMEN_SCORE'].describe() azdias['HH_EINKOMMEN_SCORE'].median() # read the attributes details of the dataset feat_info = pd.read_excel('DIAS Attributes - Values 2017.xlsx') del feat_info['Unnamed: 0'] feat_info.head(15) # Fill the attribute column where the values are NaNs using ffill feat_info_attribute = feat_info['Attribute'].fillna(method='ffill') feat_info['Attribute'] = feat_info_attribute feat_info.head(10) # Get the encoded values that are actually missing or unknown values # Subset the meaning column to contain only those values using "unknown" or "no " terms feat_info = feat_info[(feat_info['Meaning'].str.contains("unknown") | feat_info['Meaning'].str.contains("no "))] pd.set_option('display.max_rows', 500) feat_info # Convert to a list of strings feat_info.loc[feat_info['Attribute'] == 'AGER_TYP', 'Value'].astype(str).str.cat(sep=',').split(',') # Because both of the first 2 rows of feat_info belong to the same attribute, combine the values # for each row into a single list of strings unknowns = [] for attribute in feat_info['Attribute'].unique(): _ = feat_info.loc[feat_info['Attribute'] == attribute, 'Value'].astype(str).str.cat(sep=',') _ = _.split(',') unknowns.append(_) unknowns = pd.concat([pd.Series(feat_info['Attribute'].unique()), pd.Series(unknowns)], axis=1) unknowns.columns = ['attribute', 'missing_or_unknown'] feat_info = unknowns feat_info start = time.time() # Converting the missing values to Nans in the dataset missing_values = pd.Series(feat_info['missing_or_unknown'].values, index=feat_info.index).to_dict() azdias[azdias.isin(missing_values)] = np.nan end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) azdias.shape #azdias.head(10) start = time.time() # Checking how much missing data there is in each column of the dataset. missing_col = azdias.isnull().sum() end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Investigate patterns in the amount of missing data in each column. 
plt.hist(missing_col, bins=15, facecolor='b', alpha=1) plt.xlabel('Count of missing values in column') plt.ylabel('Number of columns') plt.title('Histogram for the count of missing values in columns') plt.grid(True) plt.show() missing_columns = missing_col[missing_col>0] missing_columns.sort_values(inplace=True) missing_columns.plot.bar(figsize=(20,15), facecolor='b') plt.xlabel('Column name with missing values') plt.ylabel('Number of missing values') plt.grid(True) plt.title('Column Name vs missing values') plt.show() start = time.time() # This operation is to remove the outlier columns from the dataset. # identify the columns having more than 20K missing values missing_col_updated = missing_col[missing_col>200000] # dropping those columns from the data set azdias.drop(missing_col_updated.index, axis=1, inplace=True) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # listing the dropped columns print(missing_col_updated) azdias.shape start = time.time() # Separate the data into two subsets based on the number of missing # values in each row. # Keep the the rows having less than 20 missing values for the analyis n_missing = azdias.isnull().transpose().sum() azdias_missing_low = azdias[n_missing<20] # rows having less than 20 missing values #azdias_missing_high = azdias[n_missing>=20]; # rows having more or equal to 20 missing values n_missing_low = azdias_missing_low.isnull().transpose().sum() #n_missing_high = azdias_missing_high.isnull().transpose().sum() end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # check the count for remaining number of rows azdias_missing_low.shape # The only binary categorical variable that does not take integer values is OST_WEST_KZ which uses either W or O # Re-encoding with 1 and 0. azdias_missing_low['OST_WEST_KZ'].replace(['W', 'O'], [1, 0], inplace=True) azdias_missing_low['OST_WEST_KZ'].head() # For columns > 10 different values, drop for # simplicity. cat_cols_to_drop = ['CAMEO_DEU_2015', 'GFK_URLAUBERTYP', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN', 'PRAEGENDE_JUGENDJAHRE','EINGEFUEGT_AM'] # For columns < 10 levels, re-encode using dummy variables. 
cat_cols_to_dummy = ['CJT_GESAMTTYP', 'FINANZTYP', 'GEBAEUDETYP', 'GEBAEUDETYP_RASTER', 'HEALTH_TYP', 'KBA05_HERSTTEMP', 'KBA05_MAXHERST', 'KBA05_MODTEMP', 'LP_FAMILIE_GROB', 'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'SHOPPER_TYP', 'VERS_TYP'] start = time.time() # Dropping the categorical columns azdias_missing_low.drop(cat_cols_to_drop, axis=1, inplace = True) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) start = time.time() # Creating dummy variables for columns with less than 10 categories unique values # then drop the original columns for col in cat_cols_to_dummy: dummy = pd.get_dummies(azdias_missing_low[col], prefix = col) azdias_missing_low = pd.concat([azdias_missing_low, dummy], axis = 1) print("Dropping the dummied columns") azdias_missing_low.drop(cat_cols_to_dummy, axis=1, inplace = True) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) print(azdias_missing_low.shape) # replacing values for CAMEO_INTL_2015 azdias_missing_low['CAMEO_INTL_2015'] = azdias_missing_low['CAMEO_INTL_2015'].replace('XX',-1) # replacing values for CAMEO_DEUG_2015 azdias_missing_low['CAMEO_DEUG_2015'] = azdias_missing_low['CAMEO_DEUG_2015'].replace('X',-1) # Create a cleaning function so the same changes can be done on the customer dataset as it was on the # general population dataset. def clean_data(azdias, feat_info): """ INPUT: azdias: Population/Customer demographics DataFrame feat_info: feat info DataFrame OUTPUT: Trimmed and cleaned demographics DataFrame """ # Convert missing values to Nans print("Convert missing values") missing_values = pd.Series(feat_info['missing_or_unknown'].values, index=feat_info.index).to_dict() azdias[azdias.isin(missing_values)] = np.nan missing_col = azdias.isnull().sum() print("dropping missing_col_updated") # Remove the outlier columns from the dataset missing_col_updated = missing_col[missing_col>200000] #taking out the columns having more than 20K missing values azdias.drop(missing_col_updated.index, axis=1, inplace=True) # dropping those columns from the data set n_missing = azdias.isnull().transpose().sum() azdias_missing_low = azdias[n_missing<20] # rows having less than 20 missing values n_missing_low = azdias_missing_low.isnull().transpose().sum() # The only binary categorical variable that does not take integer values is OST_WEST_KZ which uses either W or O print("replacing values for OST_WEST_KZ") azdias_missing_low['OST_WEST_KZ'].replace(['W', 'O'], [1, 0], inplace=True) azdias_missing_low['OST_WEST_KZ'].head() # For columns > 10 different values, drop for # simplicity. # For columns < 10 levels, re-encode using dummy variables. 
cat_cols_to_drop = ['CAMEO_DEU_2015', 'GFK_URLAUBERTYP', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN', 'PRAEGENDE_JUGENDJAHRE','EINGEFUEGT_AM'] cat_cols_to_dummy = ['CJT_GESAMTTYP', 'FINANZTYP', 'GEBAEUDETYP', 'GEBAEUDETYP_RASTER', 'HEALTH_TYP', 'KBA05_HERSTTEMP', 'KBA05_MAXHERST', 'KBA05_MODTEMP', 'LP_FAMILIE_GROB', 'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'SHOPPER_TYP', 'VERS_TYP'] print("Dropping categorical columns with 10 or more values") # Drop categorical columns with 10 or more values azdias_missing_low.drop(cat_cols_to_drop, axis=1, inplace = True) for col in cat_cols_to_dummy: dummy = pd.get_dummies(azdias_missing_low[col], prefix = col) azdias_missing_low = pd.concat([azdias_missing_low, dummy], axis = 1) print("Dropping dummies") azdias_missing_low.drop(cat_cols_to_dummy, axis=1, inplace = True) # replacing values for CAMEO_INTL_2015 azdias_missing_low['CAMEO_INTL_2015'] = azdias_missing_low['CAMEO_INTL_2015'].replace('XX',-1) # replacing values for CAMEO_DEUG_2015 azdias_missing_low['CAMEO_DEUG_2015'] = azdias_missing_low['CAMEO_DEUG_2015'].replace('X',-1) # Return the cleaned dataframe. return azdias_missing_low azdias_missing_low.head(10) start = time.time() azdias_missing_low=azdias_missing_low.fillna(0) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) scaler = StandardScaler() # Apply feature scaling to the population data. start = time.time() azdias_missing_low = pd.DataFrame(scaler.fit_transform(azdias_missing_low), columns = azdias_missing_low.columns) end = time.time() print("Total execution time: {:.2f} seconds".format(end-start)) ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. ###Code customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') customers.shape customers.head(10) start = time.time() # Run the clean_data function on the population dataset customers = clean_data(customers, feat_info) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) customers.shape start = time.time() customers=customers.fillna(0) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Drop the extra columns of customers dataset. 
customers.drop(columns=['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=True) cols_to_drop = ['AGER_TYP', 'ALTER_HH', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'ALTERSKATEGORIE_FEIN', 'D19_BANKEN_ANZ_12', 'D19_BANKEN_ANZ_24', 'D19_BANKEN_DATUM', 'D19_BANKEN_OFFLINE_DATUM', 'D19_BANKEN_ONLINE_DATUM', 'D19_BANKEN_ONLINE_QUOTE_12', 'D19_GESAMT_ANZ_12', 'D19_GESAMT_ANZ_24', 'D19_GESAMT_DATUM', 'D19_GESAMT_OFFLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM', 'D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP', 'D19_LETZTER_KAUF_BRANCHE', 'D19_LOTTO', 'D19_SOZIALES', 'D19_TELKO_ANZ_12', 'D19_TELKO_ANZ_24', 'D19_TELKO_DATUM', 'D19_TELKO_OFFLINE_DATUM', 'D19_TELKO_ONLINE_DATUM', 'D19_TELKO_ONLINE_QUOTE_12', 'D19_VERSAND_ANZ_12', 'D19_VERSAND_ANZ_24', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM', 'D19_VERSAND_ONLINE_QUOTE_12', 'D19_VERSI_ANZ_12', 'D19_VERSI_ANZ_24', 'D19_VERSI_ONLINE_QUOTE_12', 'EXTSEL992', 'KBA05_ANTG1', 'KBA05_ANTG2', 'KBA05_ANTG3', 'KBA05_ANTG4', 'KBA05_BAUMAX', 'KBA05_MAXVORB', 'KK_KUNDENTYP', 'TITEL_KZ'] customers.drop(cols_to_drop, axis=1, inplace = True) customers.shape # Apply feature scaling to the population data. start = time.time() customers = pd.DataFrame(scaler.fit_transform(customers), columns = customers.columns) end = time.time() print("Total execution time: {:.2f} seconds".format(end-start)) customers.head(10) # Apply PCA to the population data. start = time.time() pca = PCA() customers = pca.fit_transform(customers) end = time.time() print("Total execution time: {:.2f} seconds".format(end-start)) # Investigate the variance accounted for by each principal component. n_components = min(np.where(np.cumsum(pca.explained_variance_ratio_)>0.8)[0]+1) # 80% of variance selected fig = plt.figure() ax = fig.add_axes([0,0,1,1],True) ax2 = ax.twinx() ax.plot(pca.explained_variance_ratio_, label='Variance',) ax2.plot(np.cumsum(pca.explained_variance_ratio_), label='Cumulative Variance',color = 'red'); ax.set_title('n_components needed for >%80 explained variance: {}'.format(n_components)); ax.axvline(n_components, linestyle='dashed', color='black') ax2.axhline(np.cumsum(pca.explained_variance_ratio_)[n_components], linestyle='dashed', color='black') fig.legend(loc=(0.8,0.2)); # Re-apply PCA to the data while selecting for number of components to retain. start = time.time() pca = PCA(n_components=60, random_state=10) azdias_pca = pca.fit_transform(customers) end = time.time() print("Total execution time of this procedure: {:.2f} seconds".format(end-start)) # check the sum of the explained variance pca.explained_variance_ratio_.sum() def plot_pca(data, pca, n_components): ''' Plot the features with the most absolute variance for given pca component ''' compo = pd.DataFrame(np.round(pca.components_, 4), columns = data.keys()).iloc[n_components-1] compo.sort_values(ascending=False, inplace=True) compo = pd.concat([compo.head(5), compo.tail(5)]) compo.plot(kind='bar', title='Component ' + str(n_components)) ax = plt.gca() ax.grid(linewidth='0.5', alpha=0.5) ax.set_axisbelow(True) plt.show() # plot_pca(customers, pca, 2) from sklearn.cluster import KMeans, MiniBatchKMeans start = time.time() kmeans_scores = [] for i in range(2,30,2): #run k-means clustering on the data kmeans = MiniBatchKMeans(i) kmeans.fit(azdias_pca) #compute the average within-cluster distances. 
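    # Note: sklearn's KMeans/MiniBatchKMeans .score(X) returns the NEGATIVE of the
    # within-cluster sum of squared distances (the k-means objective), so negating it
    # below yields the SSE values that are plotted in the elbow curve further down.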
#print(i,kmeans.score(azdias_pca)) kmeans_scores.append(-kmeans.score(azdias_pca)) end = time.time() print("Total execution time: {:.2f} seconds".format(end-start)) kmeans_scores # Investigate the change in within-cluster distance across number of clusters. # HINT: Use matplotlib's plot function to visualize this relationship. # Plot elbow plot x = range(2, 30, 2) plt.figure(figsize=(8, 4)) plt.plot(x, kmeans_scores, marker='o') plt.xticks(x) plt.xlabel('K') plt.ylabel('SSE'); # Re-fit the k-means model with the selected number of clusters (20) and obtain # cluster predictions for the general population demographics data. start = time.time() kmeans_20 = KMeans(20, random_state=10) clusters_pop = kmeans_20.fit_predict(azdias_pca) end = time.time() print("Total execution time of this procedure: {:.2f} seconds".format(end-start)) #general_prop = [] customers_prop = [] x = [i+1 for i in range(20)] for i in range(20): #general_prop.append((clusters_pop == i).sum()/len(clusters_pop)) customers_prop.append((clusters_pop == i).sum()/len(clusters_pop)) df_general = pd.DataFrame({'cluster' : x, 'Customers pop':customers_prop}) #ax = sns.countplot(x='index', y = df_general['prop_1', 'prop_2'], data=df_general ) df_general.plot(x='cluster', y = ['Customers pop'], kind='bar', figsize=(9,6)) plt.ylabel('proportion of persons in each cluster') plt.show() ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
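Because only a small fraction of the mailout recipients responded, plain accuracy is a misleading yardstick here. The short sketch below (illustrative only, using synthetic data rather than the Arvato files) shows why ROC-AUC computed on predicted probabilities is the more informative choice for this kind of imbalance. ###Code
# Illustrative sketch on synthetic data: with ~1% positives, always predicting the
# majority class already gives ~99% accuracy, while ROC-AUC measures how well the
# model actually ranks positives above negatives.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

X_demo, y_demo = make_classification(n_samples=20000, n_features=20,
                                     weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, stratify=y_demo, random_state=0)

demo_clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = demo_clf.predict_proba(X_te)[:, 1]   # probability of the positive class

print("accuracy:", accuracy_score(y_te, demo_clf.predict(X_te)))  # inflated by the majority class
print("roc auc :", roc_auc_score(y_te, proba))                    # rank-based, imbalance-aware
###Output
_____no_output_____
###Markdown
This is also why the classifiers below are compared with GridSearchCV using scoring='roc_auc' rather than accuracy.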
###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_train.info() mailout_train.shape mailout_train.head() # Imbalance of REPONSE column vc = mailout_train['RESPONSE'].value_counts() vc # positive response vc[1]/(vc[0]+vc[1]) # negative response vc[0]/(vc[0]+vc[1]) mailout_train.head(10) # find features to drop because of many missing values missing_per_column = mailout_train.isnull().mean() plt.hist(missing_per_column, bins=34) start = time.time() # clean data, no splitting of rows necessary mailout_train = clean_data(mailout_train, feat_info) mailout_train.shape end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # this columns has data values as time mailout_train.drop(labels=['D19_LETZTER_KAUF_BRANCHE'], axis=1, inplace=True) mailout_train.shape mailout_train.head(10) # extract RESPONSE column response = mailout_train['RESPONSE'] # drop RESPONSE column mailout_train.drop(labels=['RESPONSE'], axis=1, inplace=True) # impute median and scale azdias imputer = Imputer(strategy='median') scaler = StandardScaler() mailout_train_imputed = pd.DataFrame(imputer.fit_transform(mailout_train)) mailout_train_scaled = scaler.fit_transform(mailout_train_imputed) mailout_train_scaled.shape response.shape ###Output _____no_output_____ ###Markdown Dataset has been preprocessed for further analysis ###Code def classify(clf, param_grid, X_train=mailout_train_scaled, y_train=response): """ Fits a classifier to its training data and prints its ROC AUC score. INPUT: - clf (classifier): classifier to fit - param_grid (dict): classifier parameters used with GridSearchCV - X_train (DataFrame): training input - y_train (DataFrame): training output OUTPUT: - classifier: input classifier fitted to the training data """ # cv uses StratifiedKFold grid = GridSearchCV(estimator=clf, param_grid=param_grid, scoring='roc_auc', cv=5) grid.fit(X_train, y_train) print(grid.best_score_) return grid.best_estimator_ start = time.time() # LogisticRegression logreg = LogisticRegression(random_state=12) classify(logreg, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) start = time.time() # BaggingClassifier bac = BaggingClassifier(random_state=12) classify(bac, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) start = time.time() # AdaBoostClassifier abc = AdaBoostClassifier(random_state=12) abc_best_est = classify(abc, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) start = time.time() # GradientBoostingClassifier gbc = GradientBoostingClassifier(random_state=12) classify(gbc, {}) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Tuning the model which gave the best result # tune with the help of GridSearchCV # the result is our model that will be used with the test set gbc = GradientBoostingClassifier(random_state=12) param_grid = {'loss': ['deviance', 'exponential'], 'max_depth': [2, 3], 'n_estimators':[80], 'random_state': [12] } start = time.time() gbc_tuned = classify(gbc, param_grid) gbc_tuned end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) # Import StratifiedKFold from sklearn.model_selection import StratifiedKFold # Initialize 5 stratified folds skf = StratifiedKFold(n_splits=5, random_state=12) skf.get_n_splits(mailout_train, 
response) print(skf) def create_pipeline(clf): # Create machine learning pipeline pipeline = Pipeline([ ('imp', imputer), ('scale', scaler), ('clf', clf) ]) return pipeline def cross_validate(clf): pipeline = create_pipeline(clf) scores = [] i = 0 # Perform 5-fold validation for train_index, test_index in skf.split(mailout_train, response): i+=1 print('Fold {}'.format(i)) # Split the data into training and test sets X_train, X_test = mailout_train.iloc[train_index], mailout_train.iloc[test_index] y_train, y_test = response.iloc[train_index], response.iloc[test_index] # Train using the pipeline pipeline.fit(X_train, y_train) #Predict on the test data y_pred = pipeline.predict(X_test) score = roc_auc_score(y_test, y_pred) scores.append(score) print("Score: {}".format(score)) return scores from sklearn.metrics import roc_auc_score start = time.time() tuned_scores = cross_validate(gbc) end = time.time() elapsed = end - start print("Total execution time: {:.2f} seconds".format(elapsed)) ###Output Fold 1 Score: 0.4995483288166215 Fold 2 Score: 0.5054254312384393 Fold 3 Score: 0.5 Fold 4 Score: 0.49992471013401596 Fold 5 Score: 0.4994729709381117 Total execution time: 242.16 seconds ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
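One practical consequence, shown in a small standalone sketch below (not using the project data): AUC is a ranking metric, so any strictly monotonic rescaling of the submitted RESPONSE scores leaves the leaderboard score unchanged, which is why raw predicted probabilities can be submitted as-is. ###Code
# Standalone sketch: ROC-AUC depends only on how the scores rank the individuals,
# so strictly monotonic transformations of the scores do not change it.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
y_true = rng.binomial(1, 0.05, size=1000)        # imbalanced 0/1 labels
scores = rng.rand(1000) + 0.3 * y_true           # noisy scores, a bit higher for positives

print(roc_auc_score(y_true, scores))
print(roc_auc_score(y_true, 100 * scores - 7))   # linear rescaling -> identical AUC
print(roc_auc_score(y_true, np.log1p(scores)))   # monotone transform -> identical AUC
###Output
_____no_output_____
###Markdown
The cell below therefore builds the submission from predict_proba outputs rather than hard 0/1 class predictions.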
###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') mailout_test.info() mailout_test.head(10) # extract lnr for later generation of the competition result file # lnr = mailout_test.LNR # clean data mailout_test = clean_data(mailout_test, feat_info) mailout_test.shape # this column has non-numeric string values mailout_test.drop(labels=['D19_LETZTER_KAUF_BRANCHE'], axis=1, inplace=True) lnr = mailout_test.LNR mailout_test.shape # impute median and scale the test data mailout_test_imputed = pd.DataFrame(imputer.transform(mailout_test)) mailout_test_scaled = scaler.transform(mailout_test_imputed) # use the trained model from Part 2 to predict the probabilities of the testing data response_test = gbc_tuned.predict_proba(mailout_test_scaled) response_test response_test.shape # generate result file for the competition; column 1 of predict_proba holds the probability of the positive class (RESPONSE = 1) result = pd.DataFrame({'LNR':lnr, 'RESPONSE':response_test[:,1]}) result.head(10) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings.
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set(style="darkgrid") # magic word for producing visualizations in notebook %matplotlib inline from matplotlib import pyplot from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from sklearn.preprocessing import StandardScaler from sklearn.metrics.cluster import adjusted_rand_score from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report from sklearn.model_selection import train_test_split from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from sklearn.model_selection import GridSearchCV from sklearn import metrics from sklearn.model_selection import KFold from sklearn.metrics import accuracy_score ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. 
Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. Data Wrangling ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') azdias.head() customers.head() customers.info() # Check string data types customers.loc[:, customers.dtypes == object] # Check mixed data types customers['CAMEO_DEUG_2015'].value_counts() # Fix mixed data types customers['CAMEO_DEUG_2015'] = customers['CAMEO_DEUG_2015'].replace(['X'],'-1') # Change datat type customers['CAMEO_DEUG_2015'] = customers['CAMEO_DEUG_2015'].astype(float) # Check mixed data types customers['CAMEO_INTL_2015'].value_counts() # Fix mixed data types customers['CAMEO_INTL_2015'] = customers['CAMEO_INTL_2015'].replace(['XX'],'-1') # Change datat types customers['CAMEO_INTL_2015'] = customers['CAMEO_INTL_2015'].astype(float) # Check if there are any missing values customers.isnull().sum().sort_values(ascending=False) # Check if there are any missing values azdias.isnull().sum().sort_values(ascending=False) # drop columns that only appears in customer dataset customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE','PRODUCT_GROUP'], axis=1, inplace=True) # drop columns with more thna 50% nan customers.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4','KK_KUNDENTYP','EXTSEL992'], axis=1, inplace=True) # drop columns not needed for features customers.drop(['LNR','CAMEO_DEU_2015','D19_LETZTER_KAUF_BRANCHE','EINGEFUEGT_AM','OST_WEST_KZ'], axis=1, inplace=True) # drop null values before scaling customers.dropna(inplace=True) # Save clean dataframe customers.to_csv('customers_clean.csv', index=False) def clean_data(df): """Function to clean the data set. 
Args: df: dataframe to be cleaned Returns: df: clean dataframe """ # read dataframe df = df # Replace wrong data type df['CAMEO_DEUG_2015'] = df['CAMEO_DEUG_2015'].replace(['X'],'-1') # Change data type from string to float df['CAMEO_DEUG_2015'] = df['CAMEO_DEUG_2015'].astype(float) df['CAMEO_INTL_2015'] = df['CAMEO_INTL_2015'].replace(['XX'],'-1') df['CAMEO_INTL_2015'] = df['CAMEO_INTL_2015'].astype(float) # drop columns with more than 50% nan df.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4','KK_KUNDENTYP','EXTSEL992'], axis=1, inplace=True) # drop columns not needed for features df.drop(['LNR','CAMEO_DEU_2015','D19_LETZTER_KAUF_BRANCHE','EINGEFUEGT_AM','OST_WEST_KZ'], axis=1, inplace=True) # drop null values before scaling df.dropna(inplace=True) # return the clean dataframe return df clean_data(azdias) # Save clean dataframe azdias.to_csv('azdias_clean.csv', index=False) plt.figure(figsize = [16, 5]) base_color = sns.color_palette()[0] plt.subplot(1, 2, 1) ax1 = sns.countplot(data = customers, x = 'ALTERSKATEGORIE_GROB') plt.xticks(rotation=90) plt.title('Customer Age Distribution 1< 30 years;2 30 - 45 years;3 46 - 60 years;4 > 60 years'); plt.subplot(1, 2, 2) sns.countplot(data = azdias, x = 'ALTERSKATEGORIE_GROB') plt.xticks(rotation=90) plt.title('Population Age Distribution'); plt.figure(figsize = [16, 5]) base_color = sns.color_palette()[0] plt.subplot(1, 2, 1) ax1 = sns.countplot(data = customers, x = 'ANREDE_KZ') plt.xticks(rotation=90) plt.title('Customer Gender Distribution, 1=Male 2=Female'); plt.subplot(1, 2, 2) sns.countplot(data = azdias, x = 'ANREDE_KZ') plt.xticks(rotation=90) plt.title('Population Gender Distribution, 1=Male 2=Female'); ###Output _____no_output_____ ###Markdown Data wrangling Summary I assessed the data for quality and tidiness issues. The following issues were identified and corrected: 1. Mixed data types in columns 18 and 19 changed to float 2. Dropped columns that only appear in the customer dataset 3. Dropped columns with more than 50% of values missing 4. Dropped columns not needed for features 5. Dropped null values to avoid errors when scaling Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so.
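One common way to make this comparison concrete (sketched below under the assumption that the cleaned frames azdias_clean and customers_clean built in the following cells share the same columns) is to fit a single clustering model on the general population, assign the customers to those same clusters, and then look at which clusters are over-represented among customers. ###Code
# Sketch of the population-vs-customers comparison: one KMeans model fitted on the
# general population, customers mapped onto the same clusters, cluster shares compared.
# Assumes cleaned, same-column frames azdias_clean and customers_clean (see below).
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

pop_scaler = StandardScaler().fit(azdias_clean)          # fit the scaler on the population only
pop_scaled = pop_scaler.transform(azdias_clean)
cust_scaled = pop_scaler.transform(customers_clean[azdias_clean.columns])

km = KMeans(n_clusters=6, random_state=0).fit(pop_scaled)

pop_share = pd.Series(km.labels_).value_counts(normalize=True).sort_index()
cust_share = pd.Series(km.predict(cust_scaled)).value_counts(normalize=True).sort_index()

# Ratios above 1 mark clusters that are over-represented among customers.
print((cust_share / pop_share).round(2))
###Output
_____no_output_____
###Markdown
The analysis below instead fits separate k-means models on the customer and population datasets and compares their cluster proportions side by side.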
Customers data set ###Code customers_clean = pd.read_csv('customers_clean.csv') # Preprocessing using standard scaler scaler = StandardScaler() scaled_features = scaler.fit_transform(customers_clean) # Use elbow method to find the correct value for k sse = [] krange = range(1,10) for k in krange: km = KMeans(n_clusters=k) km.fit(scaled_features) sse.append(km.inertia_) # Elbow plot plt.xlabel('K') plt.ylabel('Squared error') plt.plot(krange,sse); # instantiate kmeans customers_km = KMeans(n_clusters=6) # fit algorithm to features cust_pred = customers_km.fit_predict(scaled_features) cust_pred customers_clean['cluster'] = cust_pred # Calculate proportion of individuals that belong to each cluster cust_proportions = customers_clean['cluster'].value_counts()/customers_clean.shape[0] cust_proportions # Plot the data clusters cust_proportions.plot(kind = "bar", legend = True) plt.title('Customer clusters proportions') plt.xlabel('Clusters') plt.ylabel('Count'); customers_clean.groupby('cluster').mean() labels_true = customers_km.labels_ labels_true silhouette_score(scaled_features, labels_true, metric = 'euclidean',sample_size=10000) ###Output _____no_output_____ ###Markdown Population data set ###Code azdias_clean = pd.read_csv('azdias_clean.csv') scaler1 = StandardScaler() # Preprocessing using standard scaler features = scaler1.fit_transform(azdias_clean) # instantiate kmeans population_km = KMeans(n_clusters=6) # fit algorithm to features pop_pred = population_km.fit_predict(features) pop_pred azdias_clean['cluster'] = pop_pred # Calculate proportion of individuals that belong to each cluster pop_proportions = azdias_clean['cluster'].value_counts()/azdias_clean.shape[0] pop_proportions # Plot the data clusters pop_proportions.plot(kind = "bar", legend = True) plt.title('Population clusters proportions') plt.xlabel('Clusters') plt.ylabel('Count'); azdias_clean.groupby('cluster').mean() labels_pred = population_km.labels_ labels_pred silhouette_score(features, labels_pred, metric = 'euclidean',sample_size=10000) ###Output _____no_output_____ ###Markdown Customer Segmentation Summary More than 89% of customers belong to clusters 3, 0 and 4 for the customer data set. More than 84% of individuals belong to clusters 4, 0, 2, and 1 for the population data set. The marketing campaign should target customers in segments 4 and 0 for maximum impact. Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld.
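Because responders are rare, it can also help to make the classifier itself aware of the imbalance. A minimal sketch of one option (not the tuning approach used below) is a class-weighted logistic regression scored on held-out probabilities, assuming a feature matrix X and RESPONSE labels y prepared as in the following cells: ###Code
# Sketch (assumes X and y prepared as in the cells below): class_weight='balanced'
# re-weights the rare positive class, and ROC-AUC is computed on predict_proba scores.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

weighted_lr = LogisticRegression(class_weight='balanced', max_iter=1000).fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, weighted_lr.predict_proba(X_te)[:, 1]))
###Output
_____no_output_____
###Markdown
Stratifying the split keeps the rare positive class represented in both partitions.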
###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') clean_data(mailout_train) # Save clean dataframe mailout_train.to_csv('mailout_train_clean.csv', index=False) mailout_train_clean = pd.read_csv('mailout_train_clean.csv') # Define features and varibale y = mailout_train_clean.RESPONSE X = mailout_train_clean.drop('RESPONSE', axis=1) # split data into train and tesr X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3) # train classifier clf = LogisticRegression().fit(X_train, y_train) # predict on test data predictions = clf.predict(X_test) # display test results print(classification_report(y_test, predictions)) cnf_matrix = metrics.confusion_matrix(y_test, predictions) cnf_matrix # display test results print("Accuracy:",metrics.accuracy_score(y_test, predictions)) # generate a no response predictionnr nr_probs = [0 for _ in range(len(y_test))] # predict probabilities lr_probs = clf.predict_proba(X_test) # keep probabilities for the positive outcome only lr_probs = lr_probs[:, 1] # calculate scores nr_auc = roc_auc_score(y_test, nr_probs) lr_auc = roc_auc_score(y_test, lr_probs) # calculate roc curves nr_fpr, nr_tpr, _ = roc_curve(y_test, nr_probs) lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs) # plot the roc curve for the model pyplot.plot(nr_fpr, nr_tpr, linestyle='--', label='No Response') pyplot.plot(lr_fpr, lr_tpr, marker='.', label='Logistic'); #Grid Search clf = LogisticRegression() grid_values = {'penalty': ['l1', 'l2'],'C':[0.001,.009,0.01,.09,1,5,10,25]} grid_clf_acc = GridSearchCV(clf, param_grid = grid_values,scoring = 'roc_auc') grid_clf_acc.fit(X_train, y_train) print("tuned hpyerparameters :(best parameters) ",grid_clf_acc.best_params_) # train classifier clf = LogisticRegression(C=0.09,penalty="l1").fit(X_train, y_train) # predict on test data predictions = clf.predict(X_test) # display test results print(classification_report(y_test, predictions)) cnf_matrix = metrics.confusion_matrix(y_test, predictions) cnf_matrix # display test results print("Accuracy:",metrics.accuracy_score(y_test, predictions)) # generate a no response predictionnr nr_probs = [0 for _ in range(len(y_test))] # predict probabilities lr_probs = clf.predict_proba(X_test) # keep probabilities for the positive outcome only lr_probs = lr_probs[:, 1] # calculate scores nr_auc = roc_auc_score(y_test, nr_probs) lr_auc = roc_auc_score(y_test, lr_probs) # calculate roc curves nr_fpr, nr_tpr, _ = roc_curve(y_test, nr_probs) lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs) # plot the roc curve for the model pyplot.plot(nr_fpr, nr_tpr, linestyle='--', label='No Response') pyplot.plot(lr_fpr, lr_tpr, marker='.', label='Logistic'); ###Output _____no_output_____ ###Markdown K-fold Cross-Validation ###Code k = 10 cv = KFold(n_splits=k, random_state=42, shuffle=False) acc_score = [] for train_index , test_index in cv.split(X): X_train , X_test = X.iloc[train_index,:],X.iloc[test_index,:] y_train , y_test = y[train_index] , y[test_index] clf.fit(X_train,y_train) pred_values = clf.predict(X_test) acc = accuracy_score(pred_values , y_test) acc_score.append(acc) avg_acc_score = sum(acc_score)/k print('accuracy of each fold - {}'.format(acc_score)) print('Avg accuracy : {}'.format(avg_acc_score)) ###Output accuracy of each fold - [0.98468345813478553, 0.98876786929884275, 0.98604492852280468, 0.98468345813478553, 0.98638529611980941, 0.98570456092579983, 0.98638529611980941, 0.99081007488087136, 0.99081007488087136, 0.98433775961865855] 
Avg accuracy : 0.9868612776637038 ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') # Fix mixed data types mailout_test['CAMEO_DEUG_2015'] = mailout_test['CAMEO_DEUG_2015'].replace(['X'],'-1') # Change datat type mailout_test['CAMEO_DEUG_2015'] = mailout_test['CAMEO_DEUG_2015'].astype(float) # Fix mixed data types mailout_test['CAMEO_INTL_2015'] = mailout_test['CAMEO_INTL_2015'].replace(['XX'],'-1') # Change datat type mailout_test['CAMEO_INTL_2015'] = mailout_test['CAMEO_INTL_2015'].astype(float) # drop columns with more thna 50% nan mailout_test.drop(['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4','KK_KUNDENTYP','EXTSEL992'], axis=1, inplace=True) # drop columns not needed for features mailout_test.drop(['CAMEO_DEU_2015','D19_LETZTER_KAUF_BRANCHE','EINGEFUEGT_AM','OST_WEST_KZ'], axis=1, inplace=True) # Impute missing values with means mailout_test.fillna(mailout_test.mean(), inplace=True) # Save cleaned data set mailout_test.to_csv('mailout_test_clean.csv', index=False) mailout_test_clean = pd.read_csv('mailout_test_clean.csv') # define feature X1 = mailout_test_clean.drop('LNR', axis=1) # Predict RESPONSE = clf.predict_proba(X1)[:,1] RESPONSE # Create new columns mailout_test_clean['RESPONSE'] = RESPONSE # format dataframe for kaggle submission mailout = mailout_test_clean[['LNR', 'RESPONSE']].copy() # Save File mailout.to_csv('mailout.csv', index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. 
The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pickle # magic word for producing visualizations in notebook %matplotlib inline from yellowbrick.cluster import KElbowVisualizer # Importing Elbow Method Library from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.cluster import KMeans # Importing K-Means algorithm from sklearn.metrics import mean_squared_error # Evaluation metric from sklearn.model_selection import train_test_split # Preprocessing for training and testing data splits from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline from sklearn.model_selection import GridSearchCV from sklearn.svm import SVC import xgboost as xgb from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score from sklearn.metrics import roc_auc_score ###Output C:\Users\Daniel\Anaconda3\lib\site-packages\pandas\compat\_optional.py:138: UserWarning: Pandas requires version '2.7.0' or newer of 'numexpr' (version '2.6.9' currently installed). warnings.warn(msg, UserWarning) ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. 
For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('Udacity_CUSTOMERS_052018.csv', sep=';') # Be sure to add in a lot more cells (both markdown and code) to document your # approach and findings! customers.head() ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. Data Exploration: 1. Observe datatypes 2. 
Find percentage of NaN values ###Code print("The number of customers left after processing: {}".format(len(customers))) customers.dtypes.unique() null_count = [] for i in customers: value = customers[i].isnull().sum() null_count.append(value) print("Percentage of null values in each column:") for i in range(len(null_count)-1): print("{}: {:.2f}%".format(customers.columns[i], (null_count[i]/customers.shape[0] * 100))) #for i in customers: # print(i) # print(customers[i].unique()) for i in range(0, customers.shape[1]): if (customers.iloc[:, i].dtypes == 'object'): print(customers.columns[i]) customers['CAMEO_DEU_2015'].unique() customers['CAMEO_DEUG_2015'].unique() customers['CAMEO_INTL_2015'].unique() customers['D19_LETZTER_KAUF_BRANCHE'].unique() customers['EINGEFUEGT_AM'].unique() customers['OST_WEST_KZ'].unique() ###Output _____no_output_____ ###Markdown Data Preprocessing- The goal here is to remove any duplicates, remove missing values and drop unnecessary columns. ###Code customers.drop_duplicates(keep = 'first', inplace = True) azdias.drop_duplicates(keep = 'first', inplace = True) def first_preprocessing (dataframe): """Cleaning of dataframe for better data processing. """ dataframe = dataframe.copy() dataframe = dataframe.set_index(['LNR']) # Set the Customer ID to index of dataframe dataframe.replace(-1, float('NaN'), inplace=True) # -1 values represent missing values, and will be replaced with NaN values dataframe.replace(0, float('NaN'), inplace=True) # 0 values represent unknown values, and will be replaced with NaN values dataframe['CAMEO_DEU_2015'].replace('XX', dataframe['CAMEO_DEU_2015'].mode().iloc[0], inplace=True) # Replace unknown string to mode value dataframe['CAMEO_DEUG_2015'].replace('X', dataframe['CAMEO_DEUG_2015'].mode().iloc[0], inplace=True) dataframe['CAMEO_DEUG_2015'] = dataframe['CAMEO_DEUG_2015'].apply(pd.to_numeric) # Convert to integer values dataframe['CAMEO_INTL_2015'].replace('XX', dataframe['CAMEO_INTL_2015'].mode().iloc[0], inplace=True) dataframe['CAMEO_INTL_2015'] = dataframe['CAMEO_INTL_2015'].apply(pd.to_numeric) # Convert to integer values new_list = [] for i in dataframe: dataframe[i] = dataframe[i].fillna(dataframe[i].mode().iloc[0]) # Mode is used to replace NaN values due to categorical values for i in range(0, dataframe.shape[1]): if (dataframe.iloc[:, i].dtypes == 'object'): # All object dtypes to be converted to categorical values dataframe.iloc[:, i] = pd.Categorical(dataframe.iloc[:, i]) dataframe.iloc[:, i] = dataframe.iloc[:, i].cat.codes dataframe.iloc[:, i] = dataframe.iloc[:, i].astype('int64') new_list.append(dataframe.columns[i]) return dataframe # return cleaned dataframe # Preprocessing of Customers dataframe cleaned_customers = first_preprocessing(customers) cleaned_customers.head() # Dropping specific columns with greater than 40% NaN values cleaned_customers.drop(['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4','KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ', 'EINGEFUEGT_AM'], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Data Processing Confirmation- This step is to confirm that the data has been thoroughly cleaned for data analysis. 
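A compact way to assert this programmatically (a small optional check alongside the counts printed below): ###Code
# Optional sanity checks after cleaning: no missing values and no remaining string columns.
assert cleaned_customers.isnull().sum().sum() == 0, "NaN values remain"
assert (cleaned_customers.dtypes == object).sum() == 0, "non-numeric columns remain"
print("Cleaning checks passed: {} rows, {} columns".format(*cleaned_customers.shape))
###Output
_____no_output_____
###Markdown
If either assertion fails, the cleaning steps above need another pass.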
###Code print("The number of customers left after processing: {}".format(len(cleaned_customers))) print("The number of missing values within the dataframe after processing: {}".format(cleaned_customers.isnull().sum().sum())) ###Output The number of missing values within the dataframe after processing: 0 ###Markdown General Population Data Processing ###Code cleaned_population = first_preprocessing(azdias) cleaned_population.head() # Dropping specific columns with greater than 40% NaN values cleaned_population.drop(['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4','KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ', 'EINGEFUEGT_AM'], axis=1, inplace=True) cleaned_population.head() ###Output _____no_output_____ ###Markdown Side note: - Due to the size of the data, processing the data can be computationally intensive. The preferred method for imputation is KNN imputation. However, this takes roughly 1 hour for processing the Customers csv file and likely much longer for the general population. The decision was made to use Mode for imputation instead. KNN imputation would have been more representative of the actual data. ###Code #from sklearn.impute import KNNImputer # Importing K Nearest Neighbors Algorithm # K Nearest Neighbours algorithm is used to replace values with its nearest neighbours - or most similar row data #def knn_imputation(dataframe): # cleaned_dataframe = pd.DataFrame(KNNImputer(n_neighbors=5, weights='uniform', metric='nan_euclidean').fit(dataframe).transform(dataframe), columns = dataframe.columns) # return cleaned_dataframe # Code for viewing distinct values within a column #for i in cleaned_customers: # print(cleaned_customers[i].unique()) ###Output _____no_output_____ ###Markdown Data Modelling:- Now that the data has been cleaned, the data is now available for modelling. In this stage, we will opt to use Principal Component Analysis (PCA) to reduce the dimensionality of the data. In other words, we will take the dataset of 365 columns and reduce it to just a few. - After the dimensionality has been reduced, the data will be clustered to form distinct groups of customers using K-means clustering. 1. Principal Component Analysis ###Code pca = PCA(n_components=20) X_df = pca.fit(cleaned_customers).transform(cleaned_customers) PCA_components = pd.DataFrame(X_df) sum(pca.explained_variance_ratio_) pc_range = range(1, pca.n_components_+1) plt.title("Variance vs Number of Principal Components", size=20) plt.bar(pc_range, pca.explained_variance_ratio_, color='blue') plt.xlabel('Principal Components') plt.ylabel('Variance %') plt.xticks(pc_range) ###Output _____no_output_____ ###Markdown Observation: From the chart, we can see a distinct drop-off after the second component. This means that the majority of the data can be explained by using only two principal components. ###Code pca = PCA(n_components=2) X_df = pca.fit(cleaned_customers).transform(cleaned_customers) pca.explained_variance_ratio_ ###Output _____no_output_____ ###Markdown 2. 
K-Means Clustering ###Code model = KMeans() visualizer = KElbowVisualizer(model, k=(1,15)) # Loop through model to find ideal number of clusters within the data visualizer.fit(PCA_components) visualizer.show() k_means_model = KMeans(n_clusters = 3, init = "k-means++") k_means_pred = k_means_model.fit_predict(X_df) # Fitting the data onto the K-means clustering algorithm uniq = np.unique(k_means_pred) plt.figure(figsize=(15,15)) for i in uniq: plt.scatter(X_df[k_means_pred == i , 0] , X_df[k_means_pred == i , 1] , label = i) plt.xlabel([]) plt.xlabel('Principal Component 1') plt.ylabel('Principal Component 2') plt.legend() plt.show() # Plotting the data onto a chart cleaned_customers['cluster'] = k_means_model.labels_ # Adding extra column to the Customers dataframe to allocate data to separate groups cleaned_customers.head() # Locating the central locations of the clusters array = k_means_model.cluster_centers_ array = array.astype(int) array # Allocating the clustered groups onto different dataframes dataframe_cluster_0 = cleaned_customers[cleaned_customers['cluster'] == 0] dataframe_cluster_1 = cleaned_customers[cleaned_customers['cluster'] == 1] dataframe_cluster_2 = cleaned_customers[cleaned_customers['cluster'] == 2] #define data data = [len(dataframe_cluster_0), len(dataframe_cluster_1), len(dataframe_cluster_2)] labels = ['Cluster 1', 'Cluster 2', 'Cluster 3'] #define Seaborn color palette to use colors = sns.color_palette('pastel')[0:5] #create pie chart plt.title('Percentage of Total Customer Individuals in Each Cluster', size = 20) plt.pie(data, labels = labels, colors = colors, autopct='%.0f%%') plt.show() ###Output _____no_output_____ ###Markdown Modelling of General Population ###Code pca = PCA(n_components=2) pop_X_df = pca.fit(cleaned_population).transform(cleaned_population) PCA_components = pd.DataFrame(pop_X_df) print("The 2 principal components are able to explain {:.2f}% of the data.".format(sum(pca.explained_variance_ratio_) * 100)) k_means_pred = k_means_model.fit_predict(cleaned_population) cleaned_population['cluster'] = k_means_model.labels_ # Adding extra column to the population dataframe to allocate data to separate groups cleaned_population.head() # Allocating the clustered groups onto different dataframes pop_dataframe_cluster_0 = cleaned_population[cleaned_population['cluster'] == 0] pop_dataframe_cluster_1 = cleaned_population[cleaned_population['cluster'] == 1] pop_dataframe_cluster_2 = cleaned_population[cleaned_population['cluster'] == 2] #define data data = [len(pop_dataframe_cluster_0), len(pop_dataframe_cluster_1), len(pop_dataframe_cluster_2)] labels = ['Cluster 1', 'Cluster 2', 'Cluster 3'] #define Seaborn color palette to use colors = sns.color_palette('pastel')[0:5] #create pie chart plt.title('Percentage of Total Population Individuals in Each Cluster', size = 20) plt.pie(data, labels = labels, colors = colors, autopct='%.0f%%') plt.show() ###Output _____no_output_____ ###Markdown Feature Importance in Cluster allocation- After finding our clusters, it is important to identify the most influential features from the original dataframe. This will tell us which features will yield important information on our customers.
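Once the Random Forest in the next cell has been fitted, its importances can also be ranked concisely with a pandas Series (a small alternative sketch to the manual loop used below; it assumes the fitted classifier rfc and the feature frame rfc_X defined there): ###Code
# Sketch: rank the importances of an already-fitted RandomForestClassifier (rfc) against
# the columns of the feature frame (rfc_X), both defined in the cells that follow.
import pandas as pd

importances = pd.Series(rfc.feature_importances_, index=rfc_X.columns)
top20 = importances.sort_values(ascending=False).head(20)
print(top20)
top20.plot(kind='barh', figsize=(8, 6), title='Top 20 features by importance');
###Output
_____no_output_____
###Markdown
Tree-based importances are convenient here because the cluster labels themselves came from an unsupervised model, so this gives a quick proxy for which original columns drive the segmentation.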
###Code rfc_df = cleaned_customers.copy() rfc_y = rfc_df.pop('cluster') rfc_X = rfc_df[:] rfc_X.head() rfc_y.head() rfc_X_train, rfc_X_test, rfc_y_train, rfc_y_test = train_test_split(rfc_X, rfc_y) # Splitting the data into training and testing datasets rfc = RandomForestClassifier() rfc.fit(rfc_X_train, rfc_y_train) rfc_pred = rfc.predict(rfc_X_test) print ("Accuracy : {:.2f}%".format(accuracy_score(rfc_y_test, rfc_pred)*100)) rfc_array = rfc.feature_importances_ #df = pd.DataFrame(array.reshape(1, 368), columns=X.columns) # Arranging the most important features into a list rfc_importances = [] count = 0 for i in rfc_array: rfc_importances.append([i, rfc_X.columns[count]]) count += 1 # Sorting the feature importances from maximum importance to least. rfc_importances.sort(reverse=True) rfc_labels = [] rfc_values = [] for i in rfc_importances[0:20]: rfc_labels.append(i[1]) rfc_values.append(i[0]) rfc_importances[0:20] sns.barplot(rfc_labels, rfc_values) plt.xticks(rotation=90) plt.title('Feature Importances in Cluster Allocation', size = 20) ###Output C:\Users\Daniel\Anaconda3\lib\site-packages\seaborn\_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning ###Markdown Important: The biggest factor seems to be the number of cars in the postal code area. This could indicate the type of financial product Arvato is selling. Exploratory Data Analysis- Here we will be looking at some characteristics of the customer population, as well as the most important features in the dataset. - Due to the nature of the dataset, we will refer to the "DIAS Attributes - Values 2017" spreadsheet to understand the meaning of the categorical values. ###Code figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5)) figure.suptitle('Distribution of Age in Each Cluster', fontsize=20) sns.distplot(dataframe_cluster_0['ALTERSKATEGORIE_GROB'], ax=axes[0], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_1['ALTERSKATEGORIE_GROB'], ax=axes[1], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_2['ALTERSKATEGORIE_GROB'], ax=axes[2], color ='red', bins = 10, kde = False) ###Output _____no_output_____ ###Markdown Observations:- The first cluster has a younger population, with individuals falling into the category of less than 30 years of age, and between the ages of 30 and 45. - The second cluster tends to have a larger population of middle-aged individuals ranging from 46 - 60 years of age. 
- The third cluster has the largest percentage of individuals who are over the age of 60 years of age ###Code figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5)) figure.suptitle('Class of Each Cluster', fontsize=20) sns.distplot(dataframe_cluster_0['CAMEO_DEUG_2015'], ax=axes[0], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_1['CAMEO_DEUG_2015'], ax=axes[1], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_2['CAMEO_DEUG_2015'], ax=axes[2], color ='red', bins = 10, kde = False) dataframe_cluster_1['CAMEO_DEUG_2015'].unique() figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5)) figure.suptitle('Number of Cars in Postal Code Area of Each Cluster', fontsize=20) sns.distplot(dataframe_cluster_0['KBA13_ANZAHL_PKW'], ax=axes[0], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_1['KBA13_ANZAHL_PKW'], ax=axes[1], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_2['KBA13_ANZAHL_PKW'], ax=axes[2], color ='red', bins = 10, kde = False) figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15,5)) figure.suptitle('Development of Most Recent Car Manufacturer in Postal Code Area of Each Cluster', fontsize=20) sns.distplot(dataframe_cluster_0['KBA05_HERSTTEMP'], ax=axes[0], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_1['KBA05_HERSTTEMP'], ax=axes[1], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_2['KBA05_HERSTTEMP'], ax=axes[2], color ='red', bins = 10, kde = False) figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15, 5)) figure.suptitle('Number of Buildings in Postal Code Area of Each Cluster', fontsize=20) sns.distplot(dataframe_cluster_0['PLZ8_GBZ'], ax=axes[0], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_1['PLZ8_GBZ'], ax=axes[1], color ='red', bins = 10, kde = False) sns.distplot(dataframe_cluster_2['PLZ8_GBZ'], ax=axes[2], color ='red', bins = 10, kde = False) figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15, 5)) figure.suptitle('Number of buildings in Postal Codes of Each Cluster', fontsize=20) sns.distplot(dataframe_cluster_0['PLZ8_HHZ'], ax=axes[0], color ='red', bins = 5, kde = False) sns.distplot(dataframe_cluster_1['PLZ8_HHZ'], ax=axes[1], color ='red', bins = 5, kde = False) sns.distplot(dataframe_cluster_2['PLZ8_HHZ'], ax=axes[2], color ='red', bins = 5, kde = False) figure, axes = plt.subplots(1, 3, sharex=True, figsize=(15, 5)) figure.suptitle('Unemployment of Each Cluster', fontsize=20) sns.distplot(dataframe_cluster_0['RELAT_AB'], ax=axes[0], color ='red', bins = 6, kde = False) sns.distplot(dataframe_cluster_1['RELAT_AB'], ax=axes[1], color ='red', bins = 6, kde = False) sns.distplot(dataframe_cluster_2['RELAT_AB'], ax=axes[2], color ='red', bins = 6, kde = False) filtered_customers = cleaned_customers.filter(['AGER_TYP', 'ALTERSKATEGORIE_GROB', 'ANREDE_KZ', 'BALLRAUM', 'D19_BANKEN_ANZ_12', 'D19_BANKEN_ANZ_24', 'D19_BANKEN_ANZ_24', 'D19_BANKEN_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM', 'D19_VERSAND_DATUM', 'D19_VERSAND_ONLINE_QUOTE_12', 'FINANZ_MINIMALIST', 'FINANZ_SPARER', 'FINANZ_VORSORGER', 'FINANZ_ANLEGER', 'FINANZ_UNAUFFAELLIGER', 'FINANZ_HAUSBAUER', 'FINANZTYP', 'GEBURTSJAHR', 'GFK_URLAUBERTYP', 'GREEN_AVANTGARDE', 'HEALTH_TYP', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_STATUS_FEIN', 'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'PRAEGENDE_JUGENDJAHRE', 'RETOURTYP_BK_S', 'SEMIO_SOZ', 'SEMIO_FAM', 'SEMIO_REL', 'SEMIO_MAT', 
'SEMIO_VERT', 'SEMIO_LUST', 'SEMIO_ERL', 'SEMIO_KULT', 'SEMIO_RAT', 'SEMIO_KRIT', 'SEMIO_DOM', 'SEMIO_KAEM', 'SEMIO_PFLICHT', 'SEMIO_TRADV', 'SHOPPER_TYP', 'SOHO_FLAG', 'TITEL_KZ', 'VERS_TYP', 'ZABEOTYP', 'GEBAEUDETYP_RASTER', 'KKK', 'MOBI_REGIO', 'ONLINE_AFFINITAET', 'REGIOTYP']) for i in filtered_customers: filtered_customers[i] = filtered_customers[i].astype(int) #sns.set_theme(style="white") # Obtaining correlation matrix #corr_df = filtered_customers.copy() #.drop(['cluster'], axis=1) corr = filtered_customers.corr() # Matplotlib graph setup f, ax = plt.subplots(figsize=(20, 20)) # Generating Seaplot colormap cmap = sns.diverging_palette(230, 20, as_cmap=True) sns.heatmap(corr, cmap=cmap, vmax=1, center=0, square=True, linewidths=1, cbar_kws={"shrink": 1}, fmt=".2f") ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code train_csv = pd.read_csv('Udacity_MAILOUT_052018_TRAIN.csv', sep=';') train_csv.head() train_csv.drop_duplicates(keep = 'first', inplace = True) def second_preprocessing (dataframe): """Cleaning of dataframe for better data processing. 
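    Steps mirror the cleaning applied to the CUSTOMERS/AZDIAS files earlier in the notebook:
    set LNR as the index, drop the sparsest columns, map the sentinel values (-1, 0, 'X', 'XX')
    to NaN or the column mode, impute remaining NaNs with the mode, and one-hot encode any
    remaining object columns with pd.get_dummies.

    Args:
        dataframe (pd.DataFrame): raw demographics data containing an LNR id column.

    Returns:
        pd.DataFrame: cleaned, fully numeric dataframe indexed by LNR.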
""" dataframe = dataframe.copy() dataframe = dataframe.set_index(['LNR']) # Set the Customer ID to index of dataframe dataframe.drop(['ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ', 'EINGEFUEGT_AM'], axis=1, inplace=True) # Drops LNR which is the customer id dataframe.replace(-1, float('NaN'), inplace=True) # -1 values represent missing values, and will be replaced with NaN values dataframe.replace(0, float('NaN'), inplace=True) # 0 values represent unknown values, and will be replaced with NaN values dataframe['CAMEO_DEU_2015'].replace('XX', dataframe['CAMEO_DEU_2015'].mode().iloc[0], inplace=True) # Replace unknown string to mode value dataframe['CAMEO_DEUG_2015'].replace('X', dataframe['CAMEO_DEUG_2015'].mode().iloc[0], inplace=True) dataframe['CAMEO_DEUG_2015'] = dataframe['CAMEO_DEUG_2015'].apply(pd.to_numeric) # Convert to integer values dataframe['CAMEO_INTL_2015'].replace('XX', dataframe['CAMEO_INTL_2015'].mode().iloc[0], inplace=True) dataframe['CAMEO_INTL_2015'] = dataframe['CAMEO_INTL_2015'].apply(pd.to_numeric) # Convert to integer values new_list = [] for i in dataframe: dataframe[i] = dataframe[i].fillna(dataframe[i].mode().iloc[0]) # Mode is used to replace NaN values due to categorical values dataframe = pd.get_dummies(dataframe) """for i in range(0, dataframe.shape[1]): if (dataframe.iloc[:, i].dtypes == 'object'): # All object dtypes to be converted to categorical values dataframe.iloc[:, i] = pd.Categorical(dataframe.iloc[:, i]) dataframe.iloc[:, i] = dataframe.iloc[:, i].cat.codes dataframe.iloc[:, i] = dataframe.iloc[:, i].astype('int64') new_list.append(dataframe.columns[i])""" return dataframe # return cleaned dataframe # Function for viewing the number of positive responses def response_counter(response_array): """Counts the number of positive and negative responses in purchasing. 
""" number_of_yes = 0 number_of_no = 0 for i in response_array: if i == 1: number_of_yes += 1 else: number_of_no += 1 return number_of_yes, number_of_no # Obtaining obtaining target features y = train_csv.pop('RESPONSE') # Pop off target fature X = train_csv[:] # Store features in separate variable for processing X.head() X = second_preprocessing(X) X.head() # Viewing number of target feature rows print(y.unique()) # Number of unique values in target feature print(y.nunique()) number_of_yes, number_of_no = response_counter(y) print("The number of yes responses in target column is {}, and the number of no responses is {}".format(number_of_yes, number_of_no)) ###Output The number of yes responses in target column is 532, and the number of no responses is 42430 ###Markdown Modelling of Supervised Learning Model - The goal is to: - Normalize the features - Create training, testing and validation datasets - Train the model using a Random Forest Classifier or XGBoost Model - Evaluate the model ###Code sc = StandardScaler() scaled_X = sc.fit_transform(X) response_counter(y) # Split data into training and testing dataset X_train, X_test, y_train, y_test = train_test_split(scaled_X, y, test_size=0.2, random_state=2, stratify=y) ###Output _____no_output_____ ###Markdown XGBoost ###Code xgboost_model = xgb.XGBClassifier(e_label_encoder=False) param_grid = { 'n_estimators': [10, 100, 200], 'min_child_weight': [10], 'gamma': [0.5, 1, 1.5, 2, 5], 'subsample': [0.6, 0.8, 1.0], 'colsample_bytree': [0.6, 0.8, 1.0], 'max_depth': [3, 4, 5] } xgboost_model = GridSearchCV(model, param_grid, scoring='roc_auc', verbose=3) xgboost_model.fit(X_train, y_train) xgb_pred = xgboost_model.predict(X_test) ###Output _____no_output_____ ###Markdown The best parameters for the XGBoost Grid Search was: - (colsample_bytree=0.6, gamma=1, max_depth=5, min_child_weight=10, n_estimators=10, subsample=0.8) ###Code # Assessing the accuracy of the XGBoost from sklearn.metrics import accuracy_score xgb_pred = xgboost_model.predict(X_test) print ("Accuracy : {:.2f}%".format(accuracy_score(y_test, xgb_pred)*100)) from sklearn.metrics import classification_report print(classification_report(y_test, xgb_pred)) from sklearn.metrics import roc_auc_score roc_auc_score(y_test, xgb_pred) import pickle filename = 'XGBoost.pkl' pickle.dump(xgboost_model, open(filename, 'wb')) ###Output _____no_output_____ ###Markdown Random Forest Classifier ###Code rfc = RandomForestClassifier(random_state = 42) parameters = { 'n_estimators': [200, 500], 'max_features': ['auto', 'sqrt', 'log2'], 'max_depth' : [4,5,6,7,8], 'criterion' :['gini', 'entropy'] } cv = GridSearchCV(rfc, param_grid=parameters, verbose = 3, scoring='roc_auc') rfc.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown The best parameters for the Random Forest Classifier was: - ('n_estimators': [200], 'max_features': ['auto'], 'max_depth' : [4], 'criterion' :['gini']) ###Code rfc_pred = rfc.predict(X_test) print ("Accuracy : {:.2f}%".format(accuracy_score(y_test, rfc_pred)*100)) print(classification_report(y_test, rfc_pred)) print(roc_auc_score(y_test, rfc_pred)) rfc_pred = rfc.predict(X_test) print ("Accuracy : {:.2f}%".format(accuracy_score(y_test, rfc_pred)*100)) print(classification_report(y_test, rfc_pred)) filename = 'RandomForestClassifier.pkl' pickle.dump(rfc, open(filename, 'wb')) ###Output _____no_output_____ ###Markdown Test Dataset- The goal is to test the models on Kaggle to validate the results. 
###Code test_csv = pd.read_csv('Udacity_MAILOUT_052018_TEST.csv', sep=';') test_csv.head() X = second_preprocessing(test_csv) scaled_X = sc.transform(X) # Reuse the scaler fitted on the training data r_pred = rfc.predict_proba(scaled_X) xgb_pred = xgboost_model.predict_proba(scaled_X) pred_csv = pd.DataFrame() pred_csv['LNR'] = test_csv['LNR'] pred_csv.head() # Ensemble technique to improve accuracy of predictions on test dataset pred_csv['rfc_response'] = r_pred[:, 1] pred_csv['xgb_response'] = xgb_pred[:, 1] pred_csv['RESPONSE'] = (pred_csv['rfc_response'] + pred_csv['xgb_response'])/2 # Return the average probability between the models pred_csv.head() pred_csv.drop(['rfc_response', 'xgb_response'], axis=1, inplace=True) pred_csv.head() pred_csv.to_csv("Arvato_Test_prediction.csv", index=False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial Services In this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task. If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and have not been pre-cleaned. You are also free to choose whatever approach you'd like in analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 1. Business Understanding: Determine how a mail order company selling organic products can acquire new clients more efficiently. Build a model to predict which individuals are most likely to become new customers for the company. Steps involved are as follows: 1. Investigate attributes/demographics of existing company clients. Understand which attributes are most meaningful for the business and use these as a basis for the model. 2. Identify demographics of people in Germany most likely to be the new customers for the mail order company (use some sort of model to segment customers to determine the best attributes for identifying how to market to customers). 3. 
Predict individuals to target for mail order campaigns ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns #import scikitplot as skplt #in terminal, first use pip to get latest....pip install scikit-plot NO #pip install --upgrade scikit-learn NO #pip install scikit-learn==0.22.2 #from kneed import KneeLocator from sklearn.datasets import make_blobs, make_classification from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from sklearn.preprocessing import StandardScaler from sklearn.datasets import load_iris, load_digits from sklearn.metrics.pairwise import euclidean_distances from sklearn.decomposition import PCA from sklearn.metrics import silhouette_score, adjusted_rand_score from sklearn.pipeline import Pipeline, make_pipeline from sklearn.preprocessing import LabelEncoder, MinMaxScaler from mpl_toolkits.mplot3d import Axes3D from sklearn.model_selection import train_test_split from sklearn.svm import LinearSVC from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import classification_report, confusion_matrix from sklearn import metrics from sklearn.metrics import roc_auc_score, roc_curve, auc, accuracy_score, f1_score, classification_report from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, AdaBoostClassifier from sklearn.model_selection import learning_curve, KFold, GridSearchCV from sklearn.tree import DecisionTreeClassifier import time # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. 
For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. 2. Data Understanding ###Code dias_attr = pd.read_excel('DIAS Attributes - Values 2017.xlsx', index_col=0) dias_info = pd.read_excel('DIAS Information Levels - Attributes 2017.xlsx', index_col=0) #https://dev.to/chanduthedev/how-to-display-all-rows-from-data-frame-using-pandas-dha #https://stackoverflow.com/questions/52580111/how-do-i-set-the-column-width-when-using-pandas-dataframe-to-html/52580495 pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', 90) dias_info[['Attribute','Description']].sort_values(by='Attribute') dias_attr.head(5) #view unique values and counts of values per attribute. Seems there are 'unknowns', value = -1. will count these in each #data set after removing un-needed columns and decide what to do later #https://stackoverflow.com/questions/26977076/pandas-unique-values-multiple-columns dias_attr.groupby(['Value','Meaning']).size().reset_index().rename(columns={0:'count'}).sort_values(by='Meaning') #view unique values and counts of values per attribute. Seems there are 'unknowns', value = -1. will count these in each #column CAMEO_DEUG_2015 is string with numeric alpha. Seems important, has lifestyle, however, #another column CAMEO_DEU_2015 is similar with more detail. 
will drop CAMEO_DEUG_2015 #data set after removing un-needed columns and decide what to do later #https://stackoverflow.com/questions/26977076/pandas-unique-values-multiple-columns dias_attr.groupby(['Attribute','Value']).size().reset_index().rename(columns={0:'count'}).sort_values(by='Attribute') # load in the data, specify dtypes str to change all mixed types to string #azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';', dtype = 'str') azdias = pd.read_csv('Udacity_AZDIAS_052018.csv', sep=';', dtype = 'str') #customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';', dtype = 'str') customers = pd.read_csv('Udacity_CUSTOMERS_052018.csv', sep=';', dtype = 'str') ###### LOOK AT METRICS, EXISTING CUSTOMERS #look at existing and non-existing german pop individuals who have 'decent' mail order activity. Then group by age, sex, income #and count azdias_met = azdias.copy() customers_met = customers.copy() #exploratory metrics, german pop #https://stackoverflow.com/questions/49228596/pandas-case-when-default-in-pandas #https://stackoverflow.com/questions/31258134/how-to-plot-two-dataframe-on-same-graph-for-comparison azdias_met['mailorder_12mo_actvt'] = np.select( [ azdias_met['D19_VERSAND_ANZ_12'] == '0', azdias_met['D19_VERSAND_ANZ_12'] == '1', azdias_met['D19_VERSAND_ANZ_12'] == '2', azdias_met['D19_VERSAND_ANZ_12'] == '3', azdias_met['D19_VERSAND_ANZ_12'] == '4', azdias_met['D19_VERSAND_ANZ_12'] == '5', azdias_met['D19_VERSAND_ANZ_12'] == '6', ], [ 'no transactions known', 'very low activity', 'low activity', 'slightly increased activity', 'increased activity', 'high activity', 'very high activity', ], default='no transactions known' ) ######################################################## azdias_met['Age'] = np.select( [ azdias_met['ALTERSKATEGORIE_GROB'] == '0', azdias_met['ALTERSKATEGORIE_GROB'] == '-1', azdias_met['ALTERSKATEGORIE_GROB'] == '1', azdias_met['ALTERSKATEGORIE_GROB'] == '2', azdias_met['ALTERSKATEGORIE_GROB'] == '3', azdias_met['ALTERSKATEGORIE_GROB'] == '4', azdias_met['ALTERSKATEGORIE_GROB'] == '9', ], [ 'unknown age', 'unknown age', '< 30 years', '30 - 45 years', '46 - 60 years', '> 60 years', 'uniformly distributed', ], default='unknown age' ) ######################################################### azdias_met['Gender'] = np.select( [ azdias_met['ANREDE_KZ'] == '0', azdias_met['ANREDE_KZ'] == '-1', azdias_met['ANREDE_KZ'] == '1', azdias_met['ANREDE_KZ'] == '2', ], [ 'unknown', 'unknown', 'male', 'female', ], default='Unknown' ) ####################################################### azdias_met['HH_Net_Income'] = np.select( [ azdias_met['HH_EINKOMMEN_SCORE'] == '0', azdias_met['HH_EINKOMMEN_SCORE'] == '-1', azdias_met['HH_EINKOMMEN_SCORE'] == '1', azdias_met['HH_EINKOMMEN_SCORE'] == '2', azdias_met['HH_EINKOMMEN_SCORE'] == '3', azdias_met['HH_EINKOMMEN_SCORE'] == '4', azdias_met['HH_EINKOMMEN_SCORE'] == '5', azdias_met['HH_EINKOMMEN_SCORE'] == '6', ], [ 'unknown', 'unknown', 'highest income', 'very high income', 'high income', 'average income', 'lower income', 'very low income', ], default='Unknown' ) #exploratory metrics customer base #https://stackoverflow.com/questions/49228596/pandas-case-when-default-in-pandas #https://stackoverflow.com/questions/31258134/how-to-plot-two-dataframe-on-same-graph-for-comparison customers_met['mailorder_12mo_actvt'] = np.select( [ customers_met['D19_VERSAND_ANZ_12'] == '0', customers_met['D19_VERSAND_ANZ_12'] == '1', 
customers_met['D19_VERSAND_ANZ_12'] == '2', customers_met['D19_VERSAND_ANZ_12'] == '3', customers_met['D19_VERSAND_ANZ_12'] == '4', customers_met['D19_VERSAND_ANZ_12'] == '5', customers_met['D19_VERSAND_ANZ_12'] == '6', ], [ 'no transactions known', 'very low activity', 'low activity', 'slightly increased activity', 'increased activity', 'high activity', 'very high activity', ], default='no transactions known' ) ######################################################## customers_met['Age'] = np.select( [ customers_met['ALTERSKATEGORIE_GROB'] == '0', customers_met['ALTERSKATEGORIE_GROB'] == '-1', customers_met['ALTERSKATEGORIE_GROB'] == '1', customers_met['ALTERSKATEGORIE_GROB'] == '2', customers_met['ALTERSKATEGORIE_GROB'] == '3', customers_met['ALTERSKATEGORIE_GROB'] == '4', customers_met['ALTERSKATEGORIE_GROB'] == '9', ], [ 'unknown age', 'unknown age', '< 30 years', '30 - 45 years', '46 - 60 years', '> 60 years', 'uniformly distributed', ], default='unknown age' ) ######################################################### customers_met['Gender'] = np.select( [ customers_met['ANREDE_KZ'] == '0', customers_met['ANREDE_KZ'] == '-1', customers_met['ANREDE_KZ'] == '1', customers_met['ANREDE_KZ'] == '2', ], [ 'unknown', 'unknown', 'male', 'female', ], default='Unknown' ) ####################################################### customers_met['HH_Net_Income'] = np.select( [ customers_met['HH_EINKOMMEN_SCORE'] == '0', customers_met['HH_EINKOMMEN_SCORE'] == '-1', customers_met['HH_EINKOMMEN_SCORE'] == '1', customers_met['HH_EINKOMMEN_SCORE'] == '2', customers_met['HH_EINKOMMEN_SCORE'] == '3', customers_met['HH_EINKOMMEN_SCORE'] == '4', customers_met['HH_EINKOMMEN_SCORE'] == '5', customers_met['HH_EINKOMMEN_SCORE'] == '6', ], [ 'unknown', 'unknown', 'highest income', 'very high income', 'high income', 'average income', 'lower income', 'very low income', ], default='Unknown' ) #German pop with decent or greater mail order activity azdias_met2 = azdias_met[['LNR','mailorder_12mo_actvt','Age','Gender','HH_Net_Income']] \ [azdias_met.mailorder_12mo_actvt.isin(['high activity', 'very high activity','increased activity', 'slightly increased activity'])] #existing cust pop with decent or greater mail order activity customers_met2 = customers_met[['LNR','mailorder_12mo_actvt','Age','Gender','HH_Net_Income']] \ [customers_met.mailorder_12mo_actvt.isin(['high activity', 'very high activity','increased activity', 'slightly increased activity'])] azdias_met2.to_csv('Udacity_AZDIAS_met.csv', sep=';', index = False) customers_met2.to_csv('Udacity_cust_met.csv', sep=';', index = False) cust_age_met = customers_met2.groupby(['Age'],as_index = False).agg({'LNR':'count'}).copy() cust_age_met.rename(columns={"LNR": "Existing_customer_count"}, inplace = True) #cust_age_met[percent] = (cust_age_met['Existing_customer_count'] / cust_age_met['Existing_customer_count'].sum()) * 100 cust_age_met.sort_values(by = 'Existing_customer_count',ascending = False) azdias_age_met = azdias_met2.groupby(['Age']).agg({'LNR':'count'},as_index = False).sort_values(by='Age').copy() azdias_age_met.rename(columns={"LNR": "German_Pop_count"}, inplace = True) azdias_age_met.sort_values(by = 'German_Pop_count',ascending = False) cust_g_met = customers_met2.groupby(['Gender'],as_index = False).agg({'LNR':'count'}).copy() cust_g_met.rename(columns={"LNR": "Existing_customer_count"}, inplace = True) cust_g_met.sort_values(by = 'Existing_customer_count',ascending = False) azdias_g_met = azdias_met2.groupby(['Gender'],as_index = 
False).agg({'LNR':'count'}).copy() azdias_g_met.rename(columns={"LNR": "German_Pop_count"}, inplace = True) azdias_g_met.sort_values(by = 'German_Pop_count',ascending = False) cust_inc_met = customers_met2.groupby(['HH_Net_Income'],as_index = False).agg({'LNR':'count'}).copy() cust_inc_met.rename(columns={"LNR": "Existing_customer_count"}, inplace = True) cust_inc_met.sort_values(by = 'Existing_customer_count',ascending = False) azdias_inc_met = azdias_met2.groupby(['HH_Net_Income'],as_index = False).agg({'LNR':'count'}).copy() azdias_inc_met.rename(columns={"LNR": "German_Pop_count"}, inplace = True) azdias_inc_met.sort_values(by = 'German_Pop_count',ascending = False) #view top 5 records, all columns. #as mentionened before, will remove 'CAMEO_DEU_2015' #D19_LETZTER_KAUF_BRANCHE is text, EINGEFUEGT_AM is a date/time value, EINGEZOGENAM_HH_JAHR is year, #PRODUCT_GROUP and MULTI_BUYER are text, OST_WEST_KZ is alpha. #CAMEO_DEU_2015: CAMEO_4.0: specific group>>>WILL REMOVE #D19_LETZTER_KAUF_BRANCHE: not in excel metadata >>>>WILL REMOVE #EINGEFUEGT_AM: not in original excel metadata>>>>>WILL REMOVE #EINGEZOGENAM_HH_JAHR: not in original excel metadata>>>>>WILL REMOVE #OST_WEST_KZ: lag indicating the former GDR/FRG >>>>WILL REMOVE, don't see this being high impact #PRODUCT_GROUP and CUSTOMER_GROUP: This contains broad info about the customer. Will keep this, and convert text vals to num pd.set_option('display.max_columns', None) customers.head(5) #3 distinct values for product group, 2 for customer group. will replace 1st with 1,2,3, 2nd with 1 and 2 print(sorted(customers['PRODUCT_GROUP'].unique())), print(sorted(customers['CUSTOMER_GROUP'].unique())) #view top 5 records, all columns. #as mentionened before, will remove 'CAMEO_DEU_2015' #D19_LETZTER_KAUF_BRANCHE is text, EINGEFUEGT_AM is a date/time value, EINGEZOGENAM_HH_JAHR is year, OST_WEST_KZ is alpha. #CAMEO_DEU_2015: CAMEO_4.0: specific group>>>WILL REMOVE #D19_LETZTER_KAUF_BRANCHE: not in excel metadata >>>>WILL REMOVE #EINGEFUEGT_AM: not in original excel metadata>>>>>WILL REMOVE #EINGEZOGENAM_HH_JAHR: not in original excel metadata>>>>>WILL REMOVE #OST_WEST_KZ: lag indicating the former GDR/FRG >>>>WILL REMOVE, don't see this being high impact azdias.head(5) #columns having -1 in azdias: ['AGER_TYP', 'HEALTH_TYP', 'SHOPPER_TYP', 'VERS_TYP'] #During data prep we will replace -1's(unknowns) with NaNs then impute with the mean #https://stackoverflow.com/questions/50923707/get-column-name-which-contains-a-specific-value-at-any-rows-in-python-pandas azdias.columns[azdias.isin(['-1']).any()] #columns having -1 in customers: ['AGER_TYP', 'HEALTH_TYP', 'SHOPPER_TYP', 'VERS_TYP'] customers.columns[customers.isin(['-1']).any()] #It appears columns in the .csv files that start with 'D19' do not end with 'RZ' as specified in the data dictionaries. #Example: D19_VOLLSORTIMENT_RZ is 'D19_VOLLSORTIMENT' in the .csv files. #https://stackoverflow.com/questions/21285380/find-column-whose-name-contains-a-specific-string customers.filter(regex='D19').head(5) customers.shape customers.head(5) #unique LNR/persons...as count matches total rows in DF (1 row for each person) customers.LNR.nunique() azdias.head(5) #unique LNR/persons...as count matches total rows in DF azdias.LNR.nunique() azdias.shape, customers.shape #reset display options #https://stackoverflow.com/questions/26246864/restoring-the-default-display-context-in-pandas pd.reset_option('^display.', silent=True) ###Output _____no_output_____ ###Markdown 3. 
Data Preparation ###Code #load in demographics sets azdias2 = azdias.copy() customers2 = customers.copy() #first replace product and customer group customers data with numeric vals customers2['PRODUCT_GROUP'].replace({'COSMETIC': 1, 'COSMETIC_AND_FOOD': 2, 'FOOD': 3}, inplace = True) customers2['CUSTOMER_GROUP'].replace({'MULTI_BUYER': 1, 'SINGLE_BUYER': 2}, inplace = True) #columns having -1 in azdias: ['AGER_TYP', 'HEALTH_TYP', 'SHOPPER_TYP', 'VERS_TYP'] #replace -1's(unknowns) with NaNs then impute with the mean #https://stackoverflow.com/questions/29247712/how-to-replace-a-value-in-pandas-with-nan azdias2['AGER_TYP'].replace({'-1': np.NaN}, inplace = True) azdias2['HEALTH_TYP'].replace({'-1': np.NaN}, inplace = True) azdias2['SHOPPER_TYP'].replace({'-1': np.NaN}, inplace = True) azdias2['VERS_TYP'].replace({'-1': np.NaN}, inplace = True) customers2['AGER_TYP'].replace({'-1': np.NaN}, inplace = True) customers2['HEALTH_TYP'].replace({'-1': np.NaN}, inplace = True) customers2['SHOPPER_TYP'].replace({'-1': np.NaN}, inplace = True) customers2['VERS_TYP'].replace({'-1': np.NaN}, inplace = True) #percent of nulls in each column, german population #4 rows have > 90% nulls, 2 > 60%, some ~28%, many around 10%. #Want to keep most fields to retain value. The AGER_TYP field seems important though it holds 76% nulls #will remove columns ALTER_KIND1-4 (>90% nulls), and keep remaining columns pd.set_option('display.max_rows', None) (np.sum(azdias2.isnull() == True)/azdias2.shape[0])*100 #percent of nulls in each column, existing customer population #similar to the German population 4 rows have > 90% nulls #will remove columns ALTER_KIND1-4 (>90% nulls), and keep remaining columns (np.sum(customers2.isnull() == True)/customers2.shape[0])*100 #drop unwanted columns cols_drop = ['CAMEO_DEU_2015','D19_LETZTER_KAUF_BRANCHE','EINGEFUEGT_AM','EINGEZOGENAM_HH_JAHR','OST_WEST_KZ', 'ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'] azdias2.drop(cols_drop, axis = 1, inplace = True) customers2.drop(cols_drop, axis = 1, inplace = True) #convert values to numeric, rogue/error values to nan with coerce #https://stackoverflow.com/questions/36814100/pandas-to-numeric-for-multiple-columns cols2 = customers2.columns customers2[cols2] = customers2[cols2].apply(pd.to_numeric, errors='coerce') #impute nulls with mean customers2.fillna(customers2.mean(), inplace = True) #no nulls exist for cust pop (np.sum(customers2.isnull() == True)/customers2.shape[0])*100 #https://stackoverflow.com/questions/30673684/pandas-dataframe-first-x-columns #split dataframes into iterations of 50 cols, from 357 cols, 7 new dfs #concat later on axis 1 (azdias3 = pd.concat([az_1,az_2,az_3,az_4,az_5,az_6,az_7], axis = 1)) az_1 = azdias2.iloc[:, : 50].copy() az_2 = azdias2.iloc[:, 50: 100].copy() az_3 = azdias2.iloc[:, 100: 150].copy() az_4 = azdias2.iloc[:, 150: 200].copy() az_5 = azdias2.iloc[:, 200: 250].copy() az_6 = azdias2.iloc[:, 250: 300].copy() az_7 = azdias2.iloc[:, 300: 357].copy() #convert values to numeric, rogue/error values to nan with coerce############################ cols = az_1.columns az_1[cols] = az_1[cols].apply(pd.to_numeric, errors='coerce') cols = az_2.columns az_2[cols] = az_2[cols].apply(pd.to_numeric, errors='coerce') cols = az_3.columns az_3[cols] = az_3[cols].apply(pd.to_numeric, errors='coerce') cols = az_4.columns az_4[cols] = az_4[cols].apply(pd.to_numeric, errors='coerce') cols = az_5.columns az_5[cols] = az_5[cols].apply(pd.to_numeric, errors='coerce') cols = az_6.columns az_6[cols] = 
az_6[cols].apply(pd.to_numeric, errors='coerce') cols = az_7.columns az_7[cols] = az_7[cols].apply(pd.to_numeric, errors='coerce') #NOW IMPUTE WITH MEAN####################################################################### az_1.fillna(az_1.mean(), inplace = True) az_2.fillna(az_2.mean(), inplace = True) az_3.fillna(az_3.mean(), inplace = True) az_4.fillna(az_4.mean(), inplace = True) az_5.fillna(az_5.mean(), inplace = True) az_6.fillna(az_6.mean(), inplace = True) az_7.fillna(az_7.mean(), inplace = True) #combine 7 back to 1 df###################################################################### azdias3 = pd.concat([az_1,az_2,az_3,az_4,az_5,az_6,az_7],axis = 1, ignore_index=False) azdias3.head(5) #validate no nulls exist in German gen pop df. confirmed... (np.sum(azdias3.isnull() == True)/azdias3.shape[0])*100 #reduce each population to 30% for faster loading, final set, more manageable for modeling... azdias4_f = azdias3.sample(frac =.3).copy() customers3_F = customers2.sample(frac =.3).copy() #export reduced azdias and customers for final #current population sizes too big to work with azdias4_f.to_csv('Udacity_AZDIAS_fin.csv', sep=';', index = False) customers3_F.to_csv('Udacity_cust_fin.csv', sep=';', index = False) pd.reset_option('^display.', silent=True) ###Output _____no_output_____ ###Markdown 4. Modeling Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. ###Code #did some browsing on how to reduce number of features in clustering to a 'feasible' number. 
Will use Elbow with k-means #to understand the right number of features to use and reduce properly #https://www.datacamp.com/community/tutorials/k-means-clustering-python #https://www.datacamp.com/community/tutorials/k-means-clustering-r #https://stats.stackexchange.com/questions/285323/what-should-be-the-optimum-number-of-features-for-10-million-observations-for-km #https://realpython.com/k-means-clustering-python/ #https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-statistics/multivariate/how-to/cluster-k-means/interpret-the-results/key-results/ #https://towardsdatascience.com/the-easiest-way-to-interpret-clustering-result-8137e488a127 #https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1 #https://towardsdatascience.com/clustering-with-more-than-two-features-try-this-to-explain-your-findings-b053007d680a ### load in the final data having 30% of original records azdias_mod = pd.read_csv('Udacity_AZDIAS_fin.csv', sep=';') customers_mod = pd.read_csv('Udacity_cust_fin.csv', sep=';') azdias_mod.shape, customers_mod.shape azdias_mod.head(3) customers_mod.head(3) #remove customer identifier from existing customer features and Germany population data customers_feat = customers_mod.drop(columns='LNR',axis=1) pop_feat = azdias_mod.drop(columns='LNR',axis=1) #https://scikit-plot.readthedocs.io/en/stable/decomposition.html #https://towardsdatascience.com/principal-component-analysis-pca-with-scikit-learn-1e84a0c731b0 # target variance at 75%, #scale data first #looks like 91 components at 75% #will reduce components to 91 X_pca = pop_feat.values scaler = StandardScaler() scaler.fit(X_pca) X_pca_scaled = scaler.transform(X_pca) pca = PCA(random_state=1) pca.fit(X_pca_scaled) skplt.decomposition.plot_pca_component_variance(pca,target_explained_variance=0.75) plt.show() #Reviewed the 'DIAS Attributes - Values 2017.xlsx' spreadsheet in the data understanding section, #going through each attribute and description, and keep 91 attributes that appear relevent/helpful to the mail order business #Most of the 'KB' attributes are related to automobiles and not relevant to mail order. These also make up > 30% of the attributes. 
#So no issues dropping most of them cols_keep2 = ['ALTERSKATEGORIE_GROB', 'ALTER_HH', 'ANREDE_KZ','BALLRAUM','ANZ_HH_TITEL','CAMEO_DEUG_2015', 'CAMEO_INTL_2015','D19_BANKEN_DATUM','D19_BANKEN_OFFLINE_DATUM','D19_BIO_OEKO','D19_BILDUNG', 'D19_ENERGIE','D19_GARTEN', 'D19_GESAMT_OFFLINE_DATUM','D19_GESAMT_ONLINE_DATUM','D19_KONSUMTYP', 'D19_KOSMETIK','D19_LEBENSMITTEL','D19_NAHRUNGSERGAENZUNG','D19_TIERARTIKEL','D19_VERSAND_ANZ_12', 'D19_VERSAND_DATUM','D19_VERSAND_OFFLINE_DATUM','D19_VERSAND_ONLINE_DATUM','D19_VOLLSORTIMENT','EWDICHTE', 'FINANZTYP','EWDICHTE','FINANZ_MINIMALIST','FINANZ_SPARER','GEBAEUDETYP','GEBAEUDETYP_RASTER','GEBURTSJAHR', 'GREEN_AVANTGARDE','GFK_URLAUBERTYP', 'HEALTH_TYP','HH_EINKOMMEN_SCORE','INNENSTADT','KBA05_ALTER1', 'KBA05_ALTER2','KBA05_ALTER3', 'KBA05_ALTER4', 'KBA05_ANTG1','KBA05_ANTG2','KBA05_ANTG3','KBA05_ANTG4', 'KBA05_BAUMAX','KBA05_AUTOQUOT', 'KBA05_FRAU', 'KKK', 'KONSUMNAEHE','LP_FAMILIE_FEIN','LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN', 'LP_STATUS_GROB', 'MIN_GEBAEUDEJAHR', 'MOBI_REGIO','NATIONALITAET_KZ', 'ONLINE_AFFINITAET', 'ORTSGR_KLS9','PLZ8_ANTG1', 'PLZ8_ANTG2','PLZ8_ANTG3','PLZ8_ANTG4', 'PLZ8_BAUMAX', 'PLZ8_GBZ', 'PLZ8_HHZ', 'PRAEGENDE_JUGENDJAHRE','REGIOTYP','RELAT_AB','SEMIO_DOM','SEMIO_ERL','SEMIO_FAM', 'SEMIO_KAEM','SEMIO_KRIT','SEMIO_KULT','SEMIO_LUST','SEMIO_MAT', 'SEMIO_PFLICHT','SEMIO_RAT','SEMIO_REL', 'SEMIO_SOZ','SEMIO_TRADV','SEMIO_VERT','SHOPPER_TYP','SOHO_KZ','RETOURTYP_BK_S','TITEL_KZ','WOHNDAUER_2008', 'WOHNLAGE'] len(cols_keep2) customers_feat2 = customers_feat[cols_keep2].copy() pop_feat2 = pop_feat[cols_keep2].copy() customers_feat2.shape, pop_feat2.shape #initiate K means, fit existing customers df, iterate up to K clusters. #For K I randomly chose 15 to provide a good spread for the elbow graphs #identify where 'elbow' occurs, IE, SSE lowers and starts really tapering off, this is the point of best trade off, #indicating best number of 'k' values to use with K means model #https://realpython.com/k-means-clustering-python/ new_c = customers_feat2.values scaler = StandardScaler() scaler.fit(new_c) new_c_scaled2 = scaler.transform(new_c) kmeans_kwargs = {"init": "random","n_init": 10,"max_iter": 300,"random_state": 42,} # A list holds the SSE values for each k value sse = [] for k in range(1, 15): kmeans = KMeans(n_clusters=k, **kmeans_kwargs) kmeans.fit(new_c_scaled2) sse.append(kmeans.inertia_) #initiate K means, fit German population df, iterate up to K clusters. 
#For K I randomly chose 15 to provide a good spread for the elbow graphs #identify where 'elbow' occurs, IE, SSE lowers and starts really tapering off, this is the point of best trade off, #indicating best number of 'k' values to use with K means model #https://realpython.com/k-means-clustering-python/ new_g = pop_feat2.values scaler = StandardScaler() scaler.fit(new_g) new_g_scaled2 = scaler.transform(new_g) kmeans_kwargs = {"init": "random","n_init": 10,"max_iter": 300,"random_state": 42,} # A list holds the SSE values for each k value sse_pop = [] for k in range(1, 15): kmeans = KMeans(n_clusters=k, **kmeans_kwargs) #kmeans.fit(pop_feat) kmeans.fit(new_g_scaled2) sse_pop.append(kmeans.inertia_) #plot SSE Elbow: results show SSE has a very leveled tapering off after ~6 clusters ....this will be optimal #https://stackoverflow.com/questions/332289/how-do-you-change-the-size-of-figures-drawn-with-matplotlib #https://www.kite.com/python/answers/how-to-set-the-width-and-height-of-a-figure-in-matplotlib-in-python #rcParams['figure.figsize'] = 5, 10 #width1 = 10 #height1 = 5 #width_height_1 = (width1, height1) #plt.figure(figsize=width_height_1) plt.style.use("fivethirtyeight") plt.plot(range(1, 15), sse) plt.xticks(range(1, 15)) plt.xlabel("Number of Clusters") plt.ylabel("SSE") plt.show() # plot SSE Elbow: results show SSE really tapers off after 6-7 clusters ....this will be optimal #https://stackoverflow.com/questions/332289/how-do-you-change-the-size-of-figures-drawn-with-matplotlib #https://www.kite.com/python/answers/how-to-set-the-width-and-height-of-a-figure-in-matplotlib-in-python #rcParams['figure.figsize'] = 5, 10 width1 = 10 height1 = 5 width_height_1 = (width1, height1) plt.figure(figsize=width_height_1) plt.style.use("fivethirtyeight") plt.plot(range(1, 15), sse_pop) plt.xticks(range(1, 15)) plt.xlabel("Number of Clusters") plt.ylabel("SSE German Pop") plt.show() def km_pipe(X, clusters=6): '''function to: - pipeline using standardscaler to scale features, use pca for dimensionality reduction, and use kmeans for clustering - fit pipeline with training data - predict on test data ''' kmeans_kwargs = {"init": "random","n_init": 10,"max_iter": 300,"random_state": 42, "n_clusters":clusters} pipeline = Pipeline([ ('scaler', StandardScaler()), ("pca", PCA(n_components=2, random_state=42)), ('km', KMeans(**kmeans_kwargs)) ]) # fit training data and transform (fit+transform for standardscaler), then use km classifier pipeline.fit_transform(X, y=None) # predict on test data y_pred = pipeline.predict(X) return pipeline, y_pred pipeline_cust, y_pred_cust = km_pipe(customers_feat2) pipeline_gen_pop, y_pred_gen_pop = km_pipe(pop_feat2) #combine german population df with kmeans cluster azdias_mod["cluster_German_pop"] = y_pred_gen_pop #https://stats.stackexchange.com/questions/213171/testing-whether-two-datasets-cluster-similarly #https://www.researchgate.net/post/How-to-measure-the-similarity-between-two-cluster-results #convert array to pandas series, normalize to give frequencies y_pred_gen_pop_ser = pd.Series(y_pred_gen_pop) y_pred_gen_pop_ser.value_counts(normalize=True) #combine customer population df with kmeans cluster customers_mod["cluster_existing_custs"] = y_pred_cust #convert array to pandas series, normalize to give frequencies y_pred_cust_ser = pd.Series(y_pred_cust) y_pred_cust_ser.value_counts(normalize=True) #create 'percent of total' metrics using cluster labeled customer and German population data #show percent of total for each cluster, will compare customer and German 
population sets side by side germ_pop_cluster_ct = azdias_mod.groupby(["cluster_German_pop"],as_index=False).agg({"LNR" : "count"}) germ_pop_cluster_ct.rename(columns={'LNR': 'total_german', 'cluster_German_pop': 'Cluster'}, inplace=True) germ_pop_cluster_pct = germ_pop_cluster_ct germ_pop_cluster_pct['total_german'] = germ_pop_cluster_pct['total_german']/germ_pop_cluster_pct['total_german'].sum() germ_pop_cluster_pct.rename(columns={'total_german': 'perc_tot_german'}, inplace=True) cust_cluster_ct = customers_mod.groupby(["cluster_existing_custs"],as_index=False).agg({"LNR" : "count"}) cust_cluster_ct.rename(columns={'LNR': 'total_exist_custs', 'cluster_existing_custs': 'Cluster'}, inplace=True) cust_cluster_pct = cust_cluster_ct cust_cluster_pct['total_exist_custs'] = cust_cluster_pct['total_exist_custs']/cust_cluster_ct['total_exist_custs'].sum() cust_cluster_pct.rename(columns={'total_exist_custs': 'perc_tot_exist_custs'}, inplace=True) #clusters 0,1,2,5 show a greater proportion of existing customers clustered together than the German population #This indicates customers within these clusters contain attributes/features that best represent the customer base for the #mail order company. #next we will look at the features for customers within these clusters cluster_perc_diffs = pd.merge(germ_pop_cluster_pct, cust_cluster_pct, on="Cluster") cluster_perc_diffs #plot count of total features from customer group #cluster 1 has a higher proportion of customers than other clusters customers_mod.cluster_existing_custs.value_counts().plot.bar(),customers_mod.shape #rename values in clustered customer data, will examine clusters in further detail customers_mod['mailorder_12mo_actvt'] = np.select( [ customers_mod['D19_VERSAND_ANZ_12'] == 0, customers_mod['D19_VERSAND_ANZ_12'] == 1, customers_mod['D19_VERSAND_ANZ_12'] == 2, customers_mod['D19_VERSAND_ANZ_12'] == 3, customers_mod['D19_VERSAND_ANZ_12'] == 4, customers_mod['D19_VERSAND_ANZ_12'] == 5, customers_mod['D19_VERSAND_ANZ_12'] == 6, ], [ 'no transactions known', 'very low activity', 'low activity', 'slightly increased activity', 'increased activity', 'high activity', 'very high activity', ], default='no transactions known' ) ######################################################## customers_mod['Age'] = np.select( [ customers_mod['ALTERSKATEGORIE_GROB'] == 0, customers_mod['ALTERSKATEGORIE_GROB'] == -1, customers_mod['ALTERSKATEGORIE_GROB'] == 1, customers_mod['ALTERSKATEGORIE_GROB'] == 2, customers_mod['ALTERSKATEGORIE_GROB'] == 3, customers_mod['ALTERSKATEGORIE_GROB'] == 4, customers_mod['ALTERSKATEGORIE_GROB'] == 9, ], [ 'unknown age', 'unknown age', '< 30 years', '30 - 45 years', '46 - 60 years', '> 60 years', 'uniformly distributed', ], default='unknown age' ) ######################################################### customers_mod['Gender'] = np.select( [ customers_mod['ANREDE_KZ'] == 0, customers_mod['ANREDE_KZ'] == -1, customers_mod['ANREDE_KZ'] == 1, customers_mod['ANREDE_KZ'] == 2, ], [ 'unknown', 'unknown', 'male', 'female', ], default='Unknown' ) ####################################################### customers_mod['HH_Net_Income'] = np.select( [ customers_mod['HH_EINKOMMEN_SCORE'] == 0, customers_mod['HH_EINKOMMEN_SCORE'] == -1, customers_mod['HH_EINKOMMEN_SCORE'] == 1, customers_mod['HH_EINKOMMEN_SCORE'] == 2, customers_mod['HH_EINKOMMEN_SCORE'] == 3, customers_mod['HH_EINKOMMEN_SCORE'] == 4, customers_mod['HH_EINKOMMEN_SCORE'] == 5, customers_mod['HH_EINKOMMEN_SCORE'] == 6, ], [ 'unknown', 'unknown', 'highest income', 'very high 
income', 'high income', 'average income', 'lower income', 'very low income', ], default='Unknown' ) customers_mod_2 = customers_mod[['cluster_existing_custs','LNR','mailorder_12mo_actvt','Age','Gender','HH_Net_Income']] \ [customers_mod.mailorder_12mo_actvt.isin(['high activity', 'very high activity','increased activity', 'slightly increased activity'])] #looks like clusters 3 and 5 have have the most accounts with high activity. These will be analyzed further #we will take clusters 3 and 5 to identify attributes most important for mail order/new onboard customers clust_chk_1 = customers_mod_2.groupby(['cluster_existing_custs','mailorder_12mo_actvt'],as_index = False).agg({'LNR':'count'}) clust_chk_1.rename(columns={"LNR": "Existing_Cust_count"}, inplace = True) clust_chk_1.sort_values(by = 'Existing_Cust_count',ascending = False) #pull only clusters 3 and 5 customers_mod_3 = customers_mod[['cluster_existing_custs','LNR','mailorder_12mo_actvt','Age','Gender','HH_Net_Income']] \ [customers_mod.cluster_existing_custs.isin(['3','5'])].copy() #clusters 3 and 5 represent the current customer demographic well!! Older high income males #These clusters have the demographics we should target. Not only the age, gender, and income level, but #91 other attributes that could be used!! #for simplicity we will predict on males > 60 years of age and 45-60, having very high, high, and increased #mail order activity, last 12 months print('Age stats, clusters 3 and 5: ', customers_mod_3.Age.describe(),'\n','\n', '\n', 'Gender stats, clusters 3 and 5: ', customers_mod_3.Gender.describe(),'\n','\n', '\n', 'HH Income stats, clusters 3 and 5: ', customers_mod_3.HH_Net_Income.describe()) ###Output Age stats, clusters 3 and 5: count 18724 unique 5 top > 60 years freq 11873 Name: Age, dtype: object Gender stats, clusters 3 and 5: count 18724 unique 2 top male freq 15592 Name: Gender, dtype: object HH Income stats, clusters 3 and 5: count 18724 unique 7 top highest income freq 5971 Name: HH_Net_Income, dtype: object ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
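Before cleaning and modelling the TRAIN partition, the small sketch below (not part of the original pipeline; the file path mirrors the cell that follows and the `quick_` names are hypothetical) shows how the RESPONSE imbalance can be checked and why ROC-AUC, rather than plain accuracy, is used to compare models. ###Code
# Sketch: quantify the RESPONSE imbalance and score a crude numeric-only baseline with ROC-AUC.
# Illustrative only - the proper cleaning and tuning pipeline follows in the cells below.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

quick_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';')
print(quick_train['RESPONSE'].value_counts(normalize=True))  # roughly 1% positives expected

quick_numeric = quick_train.select_dtypes('number').fillna(0)  # keep numeric columns only for this baseline
quick_X = quick_numeric.drop(columns=['LNR', 'RESPONSE'])
quick_y = quick_numeric['RESPONSE']

cv_folds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
auc_scores = cross_val_score(GradientBoostingClassifier(random_state=42),
                             quick_X, quick_y, scoring='roc_auc', cv=cv_folds)
print('Baseline CV ROC-AUC: {:.3f} +/- {:.3f}'.format(auc_scores.mean(), auc_scores.std()))
###Output
_____no_output_____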
###Code mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';', dtype = 'str') #see how many response/successful customer onboarding instances occured from mailout train set #small customer onboarding rate mailout_train.RESPONSE.value_counts() mailout_train.shape mailout_train.head(3) mailout_train.LNR.nunique() mailout_train2 = mailout_train.copy() #replace -1 with NANs to not lose value, will impute with mean later #https://stackoverflow.com/questions/29247712/how-to-replace-a-value-in-pandas-with-nan mailout_train2.replace('-1', np.NaN, inplace = True) #convert all column values from string to numeric #https://stackoverflow.com/questions/36814100/pandas-to-numeric-for-multiple-columns pd.options.mode.chained_assignment = None # default='warn' cols = mailout_train2.columns mailout_train2[cols] = mailout_train2[cols].apply(pd.to_numeric, errors='coerce') #impute nulls with mean mailout_train2.fillna(mailout_train2.mean(), inplace = True) #some nulls still exist, look at these columns pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', 90) (np.sum(mailout_train2.isnull() == True)/mailout_train2.shape[0])*100 # going back to the original metadata spreadsheets imported in earlier: #appears 2 columns not found, 2 have definitions listed below #I don't believe removing these 4 columns will have a large impact, so these columns will be #removed #CAMEO_DEU_2015: CAMEO_4.0: specific group #fOST_WEST_KZ: lag indicating the former GDR/FRG #D19_LETZTER_KAUF_BRANCHE: not in excel metadata #EINGEFUEGT_AM: not in original excel metadata pd.reset_option('^display.', silent=True) mailout_train[['EINGEFUEGT_AM','D19_LETZTER_KAUF_BRANCHE','CAMEO_DEU_2015','OST_WEST_KZ']].tail(10) #columns to drop (Many features) cols_drop = ['CAMEO_DEU_2015','D19_LETZTER_KAUF_BRANCHE','EINGEFUEGT_AM','EINGEZOGENAM_HH_JAHR','OST_WEST_KZ', 'ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'] mailout_train3 = mailout_train2.copy() mailout_train3.drop(cols_drop, axis = 1, inplace = True) ########################################################################################################## #columns to keep (scaled down, fix overfitting)??? #cols_keep3 = ['ALTERSKATEGORIE_GROB', 'D19_VERSAND_ANZ_12', 'ANZ_HH_TITEL', 'ANREDE_KZ', 'HH_EINKOMMEN_SCORE', # 'CJT_GESAMTTYP', 'REGIOTYP', 'EWDICHTE','FINANZTYP','LNR','RESPONSE'] #mailout_train3 = mailout_train2[cols_keep3].copy() mailout_train3.shape #Response column indicates customers successfully onboarded. Remove that column for x, input set #y value set as response for each column, as response is output X= mailout_train3.drop(columns=['LNR', 'RESPONSE'],axis=1).values y = mailout_train3.RESPONSE.values X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0) #ML Pipeline. 
KNN after reviewing scikit cheat sheet: #https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html def knn(X_train, X_test, y_train, y_val): '''function to: - pipeline scale with standard scaler, classify with knn - fit pipeline with training data - predict on test data ''' pipeline = Pipeline([ ('scaler', StandardScaler()), ('clf', KNeighborsClassifier(n_neighbors = 5)) ]) # fit/train transformers and classifier pipeline.fit(X_train, y_train) # predict on test data y_pred = pipeline.predict(X_val) pipeline_knn = pipeline y_pred_knn = y_pred return pipeline_knn, y_pred_knn #train pipeline 2 pipeline_knn, y_pred_knn = knn(X_train, X_val, y_train, y_val) #confusion matrix, KNN, check accuracy of classifications made #looks like overwealmingly there are True Positives and a small number of false negatives. confusion_matrix(y_val,y_pred_knn) test_test = pd.Series(y_val) test_test.value_counts(normalize=True) test_pred = pd.Series(y_pred_knn) test_pred.value_counts(normalize=True) print(classification_report(y_val,y_pred_knn)) #show parameters used knn pipeline_knn.get_params() #accuracy and auc_roc score on KNN accuracy_score(y_val,y_pred_knn), roc_auc_score(y_val, y_pred_knn) #https://machinelearningmastery.com/overfitting-machine-learning-models/ #knn learning curve to identify overfitting/underfitting/good fit # define lists to collect scores train_scores, val_scores = list(), list() # define the tree depths to evaluate values = [i for i in range(1, 15)] # evaluate a decision tree for each depth for i in values: # configure the model model = KNeighborsClassifier(n_neighbors=i) # fit model on the training dataset model.fit(X_train, y_train) # evaluate on the train dataset train_yhat = model.predict(X_train) train_acc = accuracy_score(y_train, train_yhat) train_scores.append(train_acc) # evaluate on the validation dataset val_yhat = model.predict(X_val) val_acc = accuracy_score(y_val, val_yhat) val_scores.append(val_acc) # summarize progress print('>%d, train: %.3f, test: %.3f' % (i, train_acc, val_acc)) # plot of train and test scores vs number of neighbors plt.plot(values, train_scores, '-o', label='Train') plt.plot(values, val_scores, '-o', label='Validation') plt.legend() plt.show() #https://thedatascientist.com/learning-curves-scikit-learn/ #X, y = load_digits(return_X_y=True) estimator = SVC(gamma=0.001) train_sizes, train_scores, test_scores, fit_times, _ = learning_curve(estimator, X, y, cv=30,return_times=True) plt.plot(train_sizes,np.mean(train_scores,axis=1)) #https://vitalflux.com/learning-curves-explained-python-sklearn-example/ # Create a pipeline; This will be passed as an estimator to learning curve method # pipeline = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)) # # Use learning curve to get training and test scores along with train sizes # train_sizes, train_scores, test_scores = learning_curve(estimator=pipeline, X=X_train, y=y_train, cv=10, train_sizes=np.linspace(0.1, 1.0, 10), n_jobs=1) # # Calculate training and test mean and std # train_mean = np.mean(train_scores, axis=1) train_std = np.std(train_scores, axis=1) test_mean = np.mean(test_scores, axis=1) test_std = np.std(test_scores, axis=1) # # Plot the learning curve # plt.plot(train_sizes, train_mean, color='blue', marker='o', markersize=5, label='Training Accuracy') plt.fill_between(train_sizes, train_mean + train_std, train_mean - train_std, alpha=0.15, color='blue') plt.plot(train_sizes, test_mean, color='green', marker='+', markersize=5, linestyle='--', label='Validation 
Accuracy') plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std, alpha=0.15, color='green') plt.title('Learning Curve') plt.xlabel('Training Data Size') plt.ylabel('Model accuracy') plt.grid() plt.legend(loc='lower right') plt.show() #parameters for gridsearch + model fitting; then print best parameters from analysis #https://medium.com/@erikgreenj/k-neighbors-classifier-with-gridsearchcv-basics-3c445ddeb657 #new params: {'clf__metric': 'euclidean', 'clf__n_neighbors': 5, 'clf__weights': 'uniform'} 0.987663352509 #looks like all that changed was metric 'minkowski' to 'euclidean' parameters = { 'clf__n_neighbors' : [3,5,11,19], 'clf__weights' : ['uniform','distance'], 'clf__metric' : ['euclidean','manhattan'] } cv = GridSearchCV(pipeline_knn, param_grid=parameters, verbose=3) cv.fit(X_train, y_train) y_pred = cv.predict(X_val) print(cv.best_params_, cv.best_score_) #### REFLECTION: Checking training again with multiple models!!!!!!!!!!!!!!!!!!!!!!!!!!! def model_trainer(model, X_train, y_train, X_val, y_val): '''This function customization of the fit method. Args: model: instantiated model from the list of the classifiers X_train: training data y_train: training labels X_test: validation data y_test: validation labels returns: ROC-AUC score, training time ''' t = time.time() model = model.fit(X_train, y_train) y_pred = model.predict_proba(X_val)[:,1] roc_score = roc_auc_score(y_val, y_pred) #acc_score = accuracy_score(y_test,y_pred) train_time = time.time() - t return roc_score, train_time #acc_score #list of classifiers to check AUC_ROC score classifiers = [ ("Nearest Neighbors", KNeighborsClassifier(3)), ("Decision Tree", DecisionTreeClassifier(random_state=42)), ("Random Forest", RandomForestClassifier(random_state=42)), ("AdaBoost", AdaBoostClassifier(random_state=42)), ("GradientBoostingClassifier", GradientBoostingClassifier(random_state=42)) ] #function to run multiple classifiers and compare auc_roc def run_multiple(classifiers, X_train, y_train, X_val, y_val): result={ 'classifier':[], 'ROC_AUC score':[], 'train_time':[] } for name, classifier in classifiers: score, t = model_trainer(classifier, X_train, y_train, X_val, y_val) result['classifier'].append(name) result['ROC_AUC score'].append(score) result['train_time'].append(t) results_df = pd.DataFrame.from_dict(result, orient='index').transpose() return results_df run_multiple(classifiers, X_train, y_train, X_val, y_val) #https://www.askpython.com/python/examples/k-fold-cross-validation k = 5 kf = KFold(n_splits=k, random_state=None) model = GradientBoostingClassifier(random_state=42) acc_score = [] for train_index , test_index in kf.split(X): #X_train , X_test = X.iloc[train_index,:],X.iloc[test_index,:] #y_train , y_test = y[train_index] , y[test_index] model.fit(X_train,y_train) pred_values = model.predict(X_val) acc = accuracy_score(pred_values , y_val) acc_score.append(acc) avg_acc_score = sum(acc_score)/k print('accuracy of each fold - {}'.format(acc_score)) print('Avg accuracy : {}'.format(avg_acc_score)) ###Output accuracy of each fold - [0.98727597175886417, 0.98727597175886417, 0.98727597175886417, 0.98727597175886417, 0.98727597175886417] Avg accuracy : 0.9872759717588642 ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. 
If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';', dtype = 'str') #see how many response/successful customer onboarding instances occured from mailout train set #small customer onboarding rate mailout_test.head(3) mailout_test.shape mailout_test.LNR.nunique() mailout_test2 = mailout_test.copy() #replace -1 with NANs to not lose value, will impute with mean later #https://stackoverflow.com/questions/29247712/how-to-replace-a-value-in-pandas-with-nan mailout_test2.replace('-1', np.NaN, inplace = True) #convert all column values from string to numeric #https://stackoverflow.com/questions/36814100/pandas-to-numeric-for-multiple-columns pd.options.mode.chained_assignment = None # default='warn' cols = mailout_test2.columns mailout_test2[cols] = mailout_test2[cols].apply(pd.to_numeric, errors='coerce') #impute nulls with mean mailout_test2.fillna(mailout_test2.mean(), inplace = True) #some nulls still exist, look at these columns pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', 90) (np.sum(mailout_test2.isnull() == True)/mailout_test2.shape[0])*100 #columns to drop cols_drop = ['CAMEO_DEU_2015','D19_LETZTER_KAUF_BRANCHE','EINGEFUEGT_AM','EINGEZOGENAM_HH_JAHR','OST_WEST_KZ', 'ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'] mailout_test3 = mailout_test2.copy() mailout_test3.drop(cols_drop, axis = 1, inplace = True) #columns to keep (scaled down, fix overfitting)??? #cols_keep3 = ['ALTERSKATEGORIE_GROB', 'D19_VERSAND_ANZ_12', 'ANZ_HH_TITEL', 'ANREDE_KZ', 'HH_EINKOMMEN_SCORE', # 'CJT_GESAMTTYP', 'REGIOTYP', 'EWDICHTE','FINANZTYP','LNR'] #mailout_test3 = mailout_test2[cols_keep3].copy() mailout_test3.shape #df with only LNR/Account to join back later to identify LNR/Acct after prediction LNR_test = mailout_test.LNR #predict on mailout_test cleansed data (IE, mailout_test becomes input testing X, IE- X_test). #original 'seen' data, X_train, y_train is prior train set..fit model with this, predict on X_test X_test_new= mailout_test3.drop(columns=['LNR'],axis=1).values #updated ML Pipeline. 
KNN after reviewing scikit cheat sheet: ##optimal params: {'clf__metric': 'euclidean', 'clf__n_neighbors': 5, 'clf__weights': 'uniform'} 0.987663352509 def knn_new(X_train, X_test, y_train, y_test): '''function to: - pipeline scale with standard scaler, classify with knn - fit pipeline with training data - predict on test data ''' pipeline = Pipeline([ ('scaler', StandardScaler()), ('clf', KNeighborsClassifier(n_neighbors = 5, metric = 'euclidean', weights = 'uniform')) ]) # fit training data with transformers and classifier pipeline.fit(X_train, y_train) # predict on test data y_pred = pipeline.predict(X_test) pipeline_knn = pipeline y_pred_knn = y_pred return pipeline_knn, y_pred_knn pipeline_knn, y_pred_knn = knn_new(X_train, X_test_new, y_train, y_val) #https://www.geeksforgeeks.org/create-a-dataframe-from-a-numpy-array-and-specify-the-index-column-and-column-headers/ array = y_pred_knn index_values = LNR_test # creating a list of column names column_values = ['RESPONSE'] # creating the dataframe df_pred_fin = pd.DataFrame(data = array, index = index_values, columns = column_values) #change index to column df_pred_fin.reset_index(level=0, inplace=True) df_pred_fin.head() df_pred_fin.RESPONSE.value_counts() df_pred_fin.to_csv('df_final_pred_kaggle.csv', sep=';', index = False) ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ! 
Use NBViewer !To view the Visualizations in this notebook, please use NbViewer:[Arvato Project Workbook.ipynb](https://nbviewer.jupyter.org/github/lewi0332/arvato_financial_customer_segmentation/blob/master/Arvato%20Project%20Workbook.ipynb) ###Code from sklearnex import patch_sklearn patch_sklearn() # import libraries here; add more as necessary import numpy as np import pandas as pd import pickle as pkl from pandas_profiling import ProfileReport from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler from sklearn.impute import SimpleImputer, MissingIndicator from sklearn.pipeline import FeatureUnion, make_pipeline from sklearn.cluster import KMeans, MiniBatchKMeans from sklearn.metrics import roc_curve, auc from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.pipeline import FeatureUnion, make_pipeline from sklearn.ensemble import AdaBoostClassifier import xgboost as xgb import plotly.graph_objects as go import plotly.express as px pd.options.plotting.backend = "plotly" import fohr_theme_light import plotly.io as pio pio.renderers.default = "notebook_connected" import chart_studio.plotly as py ###Output Intel(R) Extension for Scikit-learn* enabled (https://github.com/intel/scikit-learn-intelex) ###Markdown Move from Udacity Workspace. First step in this project is to move the data out of the Udacity workspace as the IDE on the site was not nearly capable of working on this project. The Udacity supplied IDE took 22 minutes just to load the data into memory. Thus, it clearly did not have the resources to perform machine learning tasks on such a large data set. So, the first step in this project was to get the data out of the Udacity Workspace and download it to my local maching. The workspace was so underpowered that I could not even convert the files to Parquet first to reduce network traffic and ease the burden of downloading to my local computer. In fact, it could not even handle the conda install to _attempt_ to get the libraries needed to save the file in Parquet format.**Convert Udacity Supplied data into Parquet format**. Once the data was on my local machine I decided to conver the data to Parquet anyway. Parquet is binary format that is not only compressed (uses 10x less space) but maintains data type. While Pandas has logic built in to determine the data type, in this case it was confused when given multiple data types in a single column and throws an error. Not only do I perfer to use parquet files to CSV for these and other reasons such as future compatibility with Spark, but, in this case, by converting to Parquet now I solve this datatype issue once. All future attempts to load the data in order to work on this project will be quicker and datatype-error free.1. Fix the two columns in each csv file with multiple datatypes - The columns are categorical in nature and use an integer to label each category. As Integers are lightweight and respond to Sklearn's categorical features, I will convert the columns to integers 2. Store the resulting dataframes as parquet files to use less space on my local machine and load faster on subsequent working sessions. ###Code ''' Items in this cell were run once to convert the data files into a Parquet format. Two columns had to be pre-cleaned in order to store the parquet file in the correct data type. This is a key component of the project. Therefore I left this convert cell to show the steps needed and will later clean more columns in a similar way. 
''' # # load in the data # azdias = pd.read_csv('Udacity_AZDIAS_052018.csv') # customers = pd.read_csv('Udacity_CUSTOMERS_052018.csv') # # According to data map supplied "-1" means unknown. Fill non-integer numbers with np.nan and convert to float # azdias.CAMEO_DEUG_2015 = azdias.CAMEO_DEUG_2015.replace('X', np.NaN).astype(float) # azdias.CAMEO_INTL_2015 = azdias.CAMEO_INTL_2015.replace('XX', np.NaN).astype(float) # # According to data map supplied "-1" means unknown. Fill non-integer numbers with np.nan and convert to float # customers.CAMEO_DEUG_2015 = customers.CAMEO_DEUG_2015.replace('X', np.NaN).astype(float) # customers.CAMEO_INTL_2015 = customers.CAMEO_INTL_2015.replace('XX', np.NaN).astype(float) # azdias.to_parquet('Udacity_AZDIAS_052018.parquet') # customers.to_parquet('Udacity_CUSTOMERS_052018.parquet') ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. 
Also, considering the size of the datasets, it may take some time for them to load completely. You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code azdias = pd.read_parquet('Udacity_AZDIAS_052018.parquet') # profile = ProfileReport(azdias, minimal=True) # profile.to_file("output.html") ###Output _____no_output_____ ###Markdown Pandas Profile report One of my favorite tools for EDA is the Pandas Profiling library. It automates the EDA tasks data scientists take on each day. The report outlines the distribution of each column, its missing values, its data type, and some very helpful tools for categorical columns. **In the real world** I would do most of my learning about the data and make most decisions here with this HTML document. However, for the scope of this project, it is difficult to show the process with this tool, so I will create a few visuals to highlight *some* of the decisions. ![Data Profile html visual](pandasprofile.png)![categorical data](pandasprofilecategorical.png) Example In the first visual for the `LNR` feature, Pandas Profiler has clearly shown that this feature has unique values for all rows and is likely an ID or another type of value that will not help our model. It will be dropped from the dataset before clustering. Part 0.1 Read in the customer data file. These two files will be compared after creating clusters from the data, and I will check for differences between them. ###Code customers = pd.read_parquet('Udacity_CUSTOMERS_052018.parquet') print(azdias.shape) print(customers.shape) ###Output (891221, 366) (191652, 369) ###Markdown The project brief mentioned the three additional columns in the customers dataset. However, we need to check that the remaining 366 columns are the same. Here I use Python's `.intersection()` method to compare the two sets of column names. ###Code print(len(set(azdias.columns).intersection(customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], axis=1).columns))) print(len(set(customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], axis=1).columns).intersection(azdias.columns))) ###Output 366 366 ###Markdown Part 0.2 Read in Data Dictionary - Convert unknown values to NaN Next I will better understand the columns with information from the data dictionary files `DIAS Attributes - Values 2017.xlsx` and `DIAS Information Levels - Attributes 2017.xlsx`. In these files we can learn more about the meaning of each value code. Certain value codes represent `unknown` or `missing` values that were not collected or are completely unknown. Our machine learning algorithms will respond better to `NaN` values in their place. Thus, I must discover and replace all value codes that mean `unknown` or `missing`.
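As a small illustration of the idea before parsing the full dictionary (the attribute and code below are only an example; `-1` is one of the codes the data map lists as unknown), replacing a coded unknown with `NaN` is a simple masked assignment: ###Code
# Minimal sketch of the code-to-NaN idea on a toy frame (example attribute and code only).
import numpy as np
import pandas as pd

toy = pd.DataFrame({'AGER_TYP': [2, -1, 3, -1]})
toy.loc[toy['AGER_TYP'] == -1, 'AGER_TYP'] = np.nan
###Output _____no_output_____ ###Markdown The cells below build this mapping for every affected column by parsing the Excel data dictionary.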
###Code values = pd.read_excel('DIAS Attributes - Values 2017.xlsx', sheet_name='Tabelle1', header=1, engine='openpyxl') values.head(10) ###Output _____no_output_____ ###Markdown Give the size of the dataset, perhaps I can use a bit of string matching to find each attributes specfic code and programitically remove these values. I will first build a list of only `attribute` and `value` that can be stored and reused in future data cleaning. With these data dictionary documents open in Excel, a quick visual scan helped me see that there are basically 4 distinct string values that represent an unknown value: 'unknown', 'unknown / no main age detectable', 'no transaction known', and 'no transactions known' ###Code # Forward fill the attribute values. values['Attribute'] = values['Attribute'].ffill() value_unknown_codes = values.loc[(values['Meaning'].str.contains(pat='unknown', case=False, regex=False, na=False)) | (values['Meaning'].str.contains(pat='no transaction', case=False, regex=False, na=False))] value_unknown_codes.head(10) ###Output _____no_output_____ ###Markdown This looked like it worked. Test for all types. ###Code value_unknown_codes['Meaning'].unique() ###Output _____no_output_____ ###Markdown That is all four of the unknown meanings that I could see in the file. It looks like our string matching was successful and we are able to parse and retain all of them.Next, the values on the unknown fields are sometimes multiple integers seperated by a comma. We will need to seperate these and give them their own row as both values will need to be convertied to NaN values. ###Code value_unknown_codes['Value'].values[1] value_unknown_codes = value_unknown_codes[['Attribute']].join(value_unknown_codes['Value'].astype('str').str.split(',', expand=True)).melt(id_vars='Attribute').drop('variable', axis=1) value_unknown_codes.dropna(inplace=True) value_unknown_codes.describe() value_unknown_codes['value'].unique() ###Output _____no_output_____ ###Markdown This file will now be used in a future data cleaning process to convert missing values to proper `np.nan` values ###Code value_unknown_codes.value = value_unknown_codes.value.astype('int') azdias.isna().sum().sum() azdias.isna().sum().index def plot_missing_values(dff, renderer='notebook_connected'): """ Visualize the missing values per column in a dataset INPUT - dff : Pandas Dataframe, renderer : Choice of plotly rendering tools. Default of "notebook_connected".capitaliz Other options include "chrome", "firefox", "jpg" etc. OUTPUT - Ploty chart """ temp_missing_values = dff.isna().sum() temp_missing_values.sort_values(ascending=False, inplace=True) fig = go.Figure() fig.add_trace(go.Bar( x=temp_missing_values.index, y=temp_missing_values/len(dff) )) fig.update_yaxes(title='Rate of Missing Values', tickformat='%') fig.update_xaxes(title = 'Column name') fig.update_layout(title= "Missing Values per Column") return fig fig = plot_missing_values(azdias) fig.show() # py.plot(fig, filename = 'udsnd_1_missing', auto_open=False) missing_attributes = [] for idx, row in value_unknown_codes.iterrows(): try: azdias.loc[azdias[row['Attribute']] == row['value'], row['Attribute']] = np.nan except KeyError: missing_attributes.append(row['Attribute']) azdias.isna().sum().sum() ###Output _____no_output_____ ###Markdown Plot the missing values again after converting any of the explicit value codes to a NaN value. 
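Since this same code-to-NaN pass is repeated for the customer and mailout files further down, the loop above can be wrapped in a small helper. The sketch below is only a refactoring of that loop (the helper name is illustrative, and the same logic is later folded into the cleaning function); the refreshed missing-value plot follows in the next cell. ###Code
# Sketch: the code-to-NaN loop from above, wrapped so it can be reused on the
# customer and mailout datasets. Returns the frame plus any attributes that are
# listed in the dictionary but absent from the data.
def replace_unknown_codes(dff, unknown_codes):
    not_found = []
    for _, row in unknown_codes.iterrows():
        try:
            dff.loc[dff[row['Attribute']] == row['value'], row['Attribute']] = np.nan
        except KeyError:
            not_found.append(row['Attribute'])
    return dff, not_found
###Output _____no_output_____ ###Markdown With that noted, here is the missing-value plot again.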
###Code fig = plot_missing_values(azdias) fig.show() ###Output _____no_output_____ ###Markdown Lets look at which attributes were in the data dictionary, but not in our `azdias` dataset: ###Code # Items in the attribute dictionary that are not represented in the dataset: missing_attributes ###Output _____no_output_____ ###Markdown Importantly, we must look to see which attributes are in our dataset that are **not** in our data dictionary. These items we will not know their meaning. Thus, we do not know if there is a `missing` or `unknown` coded value ###Code azdias_attr = list(azdias.columns) values_attr = values.Attribute.unique() azdias_missing_attr = [element for element in azdias_attr if element not in values_attr] # 94 columns in azdias that we do not find in our data dictionary len(azdias_missing_attr) # Perhaps these missing attributes are of a specific type? azdias[azdias_missing_attr].info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 891221 entries, 0 to 891220 Data columns (total 94 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 LNR 891221 non-null int64 1 AKT_DAT_KL 817722 non-null float64 2 ALTER_KIND1 81058 non-null float64 3 ALTER_KIND2 29499 non-null float64 4 ALTER_KIND3 6170 non-null float64 5 ALTER_KIND4 1205 non-null float64 6 ALTERSKATEGORIE_FEIN 628274 non-null float64 7 ANZ_KINDER 817722 non-null float64 8 ANZ_STATISTISCHE_HAUSHALTE 798073 non-null float64 9 ARBEIT 794005 non-null float64 10 CAMEO_INTL_2015 791869 non-null float64 11 CJT_KATALOGNUTZER 886367 non-null float64 12 CJT_TYP_1 886367 non-null float64 13 CJT_TYP_2 886367 non-null float64 14 CJT_TYP_3 886367 non-null float64 15 CJT_TYP_4 886367 non-null float64 16 CJT_TYP_5 886367 non-null float64 17 CJT_TYP_6 886367 non-null float64 18 D19_BANKEN_DIREKT 891221 non-null int64 19 D19_BANKEN_GROSS 891221 non-null int64 20 D19_BANKEN_LOKAL 891221 non-null int64 21 D19_BANKEN_REST 891221 non-null int64 22 D19_BEKLEIDUNG_GEH 891221 non-null int64 23 D19_BEKLEIDUNG_REST 891221 non-null int64 24 D19_BILDUNG 891221 non-null int64 25 D19_BIO_OEKO 891221 non-null int64 26 D19_BUCH_CD 891221 non-null int64 27 D19_DIGIT_SERV 891221 non-null int64 28 D19_DROGERIEARTIKEL 891221 non-null int64 29 D19_ENERGIE 891221 non-null int64 30 D19_FREIZEIT 891221 non-null int64 31 D19_GARTEN 891221 non-null int64 32 D19_HANDWERK 891221 non-null int64 33 D19_HAUS_DEKO 891221 non-null int64 34 D19_KINDERARTIKEL 891221 non-null int64 35 D19_KONSUMTYP_MAX 891221 non-null int64 36 D19_KOSMETIK 891221 non-null int64 37 D19_LEBENSMITTEL 891221 non-null int64 38 D19_LETZTER_KAUF_BRANCHE 634108 non-null object 39 D19_LOTTO 634108 non-null float64 40 D19_NAHRUNGSERGAENZUNG 891221 non-null int64 41 D19_RATGEBER 891221 non-null int64 42 D19_REISEN 891221 non-null int64 43 D19_SAMMELARTIKEL 891221 non-null int64 44 D19_SCHUHE 891221 non-null int64 45 D19_SONSTIGE 891221 non-null int64 46 D19_SOZIALES 634108 non-null float64 47 D19_TECHNIK 891221 non-null int64 48 D19_TELKO_MOBILE 891221 non-null int64 49 D19_TELKO_ONLINE_QUOTE_12 634108 non-null float64 50 D19_TELKO_REST 891221 non-null int64 51 D19_TIERARTIKEL 891221 non-null int64 52 D19_VERSAND_REST 891221 non-null int64 53 D19_VERSI_DATUM 891221 non-null int64 54 D19_VERSI_OFFLINE_DATUM 891221 non-null int64 55 D19_VERSI_ONLINE_DATUM 891221 non-null int64 56 D19_VERSI_ONLINE_QUOTE_12 634108 non-null float64 57 D19_VERSICHERUNGEN 891221 non-null int64 58 D19_VOLLSORTIMENT 891221 non-null int64 59 D19_WEIN_FEINKOST 891221 non-null int64 60 DSL_FLAG 
798073 non-null float64 61 EINGEFUEGT_AM 798073 non-null object 62 EINGEZOGENAM_HH_JAHR 817722 non-null float64 63 EXTSEL992 237068 non-null float64 64 FIRMENDICHTE 798066 non-null float64 65 GEMEINDETYP 793947 non-null float64 66 HH_DELTA_FLAG 783619 non-null float64 67 KBA13_ANTG1 785421 non-null float64 68 KBA13_ANTG2 785421 non-null float64 69 KBA13_ANTG3 785421 non-null float64 70 KBA13_ANTG4 785421 non-null float64 71 KBA13_BAUMAX 785421 non-null float64 72 KBA13_CCM_1401_2500 785421 non-null float64 73 KBA13_GBZ 785421 non-null float64 74 KBA13_HHZ 785421 non-null float64 75 KBA13_KMH_210 785421 non-null float64 76 KK_KUNDENTYP 306609 non-null float64 77 KOMBIALTER 891221 non-null int64 78 KONSUMZELLE 798066 non-null float64 79 MOBI_RASTER 798073 non-null float64 80 RT_KEIN_ANREIZ 886367 non-null float64 81 RT_SCHNAEPPCHEN 886367 non-null float64 82 RT_UEBERGROESSE 839995 non-null float64 83 SOHO_KZ 817722 non-null float64 84 STRUKTURTYP 793947 non-null float64 85 UMFELD_ALT 793435 non-null float64 86 UMFELD_JUNG 793435 non-null float64 87 UNGLEICHENN_FLAG 817722 non-null float64 88 VERDICHTUNGSRAUM 793947 non-null float64 89 VHA 817722 non-null float64 90 VHN 770025 non-null float64 91 VK_DHT4A 815304 non-null float64 92 VK_DISTANZ 815304 non-null float64 93 VK_ZG11 815304 non-null float64 dtypes: float64(53), int64(39), object(2) memory usage: 639.2+ MB ###Markdown Looks like there were only 2 columns of type "object". All others are `float` or `int` ###Code azdias[azdias_missing_attr].isna().sum().sum() fig = plot_missing_values(azdias[azdias_missing_attr]) fig.show() # py.plot(fig, filename = 'udsnd_2_missing', auto_open=False) ###Output _____no_output_____ ###Markdown This presents an interesting challenge about what to do with these columns. Given we do not know the meaning of the coded values, it is possible that some features have values that are unknown but represented with a digitI could see above that there were 5 unique value codes associated with being unknown: `[-1, 0, 10, 9]`. While I can see these codes in the data in the columns that we have no values for, it is not **certain** that they mean the same thing and we could be losing data if we choose to convert them to `NaN`. **The real world** - at this point in a real on-the-job scenario, I would stop the process and communicate with someone in the company to solve for this missing information as it might greatly increase our accuracy. In this case, I might have to test both pathways: 1. Completely Remove the columns I don't understand as the inclusion of `NaN` values may pollute the results. 2. Convert the value codes that seem to match the `unknown` of other columns. ###Code azdias[azdias_missing_attr].head() ###Output _____no_output_____ ###Markdown I will attempt something perhaps a little onthrodox. I will try the second option of keeping each column we don't know, but take a closer look at the columns values to see if we can make a more informed decision about what might be the liekly value code that represents `unknown`. First, We can see from the visual above there are many columns that have significant `NaN` values already. I will assume that in these cases the data was collected in a way that left the value empty instead of assigning unknown to a code. For instance, `Alter_Kind` describes Other childern and clearly increases in missing values as the number grows. Fewer families have 4 childern than 3, etc.The remaining columns (those with less than 50K `nan` values) are suspect. 
For these columns, I will look at the distribution of values by name group. Perhaps we can see a pattern, and one that matches our previous codes. ###Code temp_missing_values = azdias[azdias_missing_attr].isna().sum() temp_missing_values = temp_missing_values[temp_missing_values<50_000] ###Output _____no_output_____ ###Markdown **D19_** This name group has the most features in view. From the Attributes .xlsx file I learn that these columns contain data on the mail-order activity of consumers, differentiated by product group. I will first examine all of these values to see if we can find an obvious place to start when assuming which value code might represent `unknown`. ###Code D19_missing = temp_missing_values[temp_missing_values.index.str.contains('D19_')] D19_missing ###Output _____no_output_____ ###Markdown Here are all the columns that begin with `D19_`, have no explicit information about which value code might represent `unknown`, and have fewer than 50,000 `NaN` values across all rows of the dataset. After another visual scan of the `DIAS Information Levels - Attributes 2017.xlsx` file, I can see that all of these `D19_` columns match an Information Level called `125 x 125 grid`; **HOWEVER**, they are all missing the `_RZ` suffix on the column name. Further, the `DIAS Attributes - Values 2017.xlsx` file does contain information about the missing value codes for the columns with the `_RZ` suffix. While this is another unorthodox assumption, a test could be made using the `unknown` value codes from the near-matching column names. Simply dropping the `_RZ` suffix from the `DIAS Attributes - Values 2017.xlsx` list would match up the column names for all but 4 of our missing `D19_` columns. The 4 `D19_` categories that would still remain are: - D19_KONSUMTYP_MAX - D19_VERSI_DATUM - D19_VERSI_OFFLINE_DATUM - D19_VERSI_ONLINE_DATUM **`D19_VERSI_*`** Each of these three columns has a nearly identical match in `D19_VERSAND_*`. Later I will add a change to the unknowns list to match these columns as well. ###Code D19_missing = azdias[D19_missing.index].astype('float').stack().value_counts() D19_missing.sort_index().plot.bar() ###Output _____no_output_____ ###Markdown This view shows that all value codes are in a similar range to the other known `D19_` columns, and the distribution of `0` values also mirrors that of the known group of `D19_` columns that had missing values. Thus, I **will** attempt to remove the `_RZ` suffix from the `Attribute` column in the final data clean to match these columns with the nearly identical columns in the `DIAS Attributes - Values 2017.xlsx` file. ###Code # Remove _RZ suffix from Attributes and re-run our for loop to # replace value codes known to represent 'missing' with np.nan value_unknown_codes.loc[value_unknown_codes['Attribute'].str.endswith('_RZ'), 'Attribute'] = value_unknown_codes.loc[value_unknown_codes['Attribute'].str.endswith('_RZ'), 'Attribute'].str.replace('_RZ', '') missing_attributes = [] for idx, row in value_unknown_codes.iterrows(): try: azdias.loc[azdias[row['Attribute']] == row['value'], row['Attribute']] = np.nan except KeyError: missing_attributes.append(row['Attribute']) ###Output _____no_output_____ ###Markdown Having adjusted the attribute names in the `DIAS Attributes - Values 2017.xlsx` list of unknown value codes, let's now see how many `NaN` values are in our dataset.
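Before re-plotting, a quick sanity check (sketched below) can confirm that stripping the `_RZ` suffix actually made the dictionary attributes line up with real columns. ###Code
# Sketch: how many attribute/value rows in the unknown-codes list now point at
# a column that actually exists in azdias after removing the _RZ suffix?
matched = value_unknown_codes['Attribute'].isin(azdias.columns)
print(f"{matched.sum()} of {len(value_unknown_codes)} attribute/value rows match a column in azdias")
###Output _____no_output_____ ###Markdown With the matching confirmed, the missing-value plots can be redrawn.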
###Code fig = plot_missing_values(azdias[azdias_missing_attr]) fig.show() fig = plot_missing_values(azdias) fig.show() # py.plot(fig, filename = 'udsnd_3_missing', auto_open=False) ###Output _____no_output_____ ###Markdown Next, I will move on to the remaining columns that have no matching name in the "Value Code" file and have very few `NaN` values ###Code temp_missing_values = azdias[azdias_missing_attr].isna().sum() temp_missing_values = temp_missing_values[temp_missing_values<50_000] temp_missing_values ###Output _____no_output_____ ###Markdown **CJT_ name group**These are values about the Customer-Journey-Typology relating to the preferred information and buying channels of consumers. Interestingly, in the `Attribute` information, we can see that this data source should be just one column with codes to represent 6 types of consumers. But in fact I can see that there are 6 columns with `_typ_x` suffix. Perhaps these columns represent a one-hot encoded version of the existing and known column: `CJT_GESAMTTYP`? ###Code azdias[['CJT_GESAMTTYP', 'CJT_KATALOGNUTZER', 'CJT_TYP_1', 'CJT_TYP_2', 'CJT_TYP_3', 'CJT_TYP_4', 'CJT_TYP_5', 'CJT_TYP_6' ]].head(10) # DataFrame values where the Gesamttyp values are NaN. azdias[['CJT_GESAMTTYP', 'CJT_KATALOGNUTZER', 'CJT_TYP_1', 'CJT_TYP_2', 'CJT_TYP_3', 'CJT_TYP_4', 'CJT_TYP_5', 'CJT_TYP_6' ]].loc[azdias['CJT_GESAMTTYP'].isna()] ###Output _____no_output_____ ###Markdown Ok, Good learning here. I only have verification on the value code that represents `unknown` for one of the `CJT_` columns. It looks like I can see that all of the missing values from the column I **do** know are **also** missing from the remaining columns. The count of `4854` also matches two further columns we did not have value code information on but were not named `CJT_*`. `CJT_` columns are satisified here. **However**, I will note that for the next step in feature engineering the `CJT_GESAMTTYPE` (or `total_type` in english) is a _categorical_ variable and that the others seem to be a _ordinal_ value. ###Code temp_missing_values.loc[~temp_missing_values.index.str.contains('CJT_')] ###Output _____no_output_____ ###Markdown The Remaining columns that have very few `NaN` values and are not represented in our list of value codes to determine if there is a code that represents `unknown` are above. I will deal with these columns individually: - `LNR`: This is something of a Unique ID that was discovered above when looking at the Pandas Profiling report. I will drop this column before makeing any predictions- `D19_BUCH_CD` - This column was one that should have been included in the `D19_` group process above, but it has added a `_CD` suffix, which I learned from the Value Codes list that the attribute includes consumer interest in "books and cds". I will assume the `azdias` dataset column `D19_BUCH` is equal to `D19_BUCH_CD` and use the value code associated with it to convert to `NaN` - `D19_KONSUMTYP_MAX` - This column nearly matchs with `D19_KONSUMTYP`, but the `_MAX` suffix could be that it has a separate meaning. I will manually match the two as I did above with `D19_BUCH_CD` as the likelihood of the same type of Value Code (`9`) is rare. The final evidence that `9` is the value of `unknown` here and should be converted to `NaN` is that the number if existing `NaN` values in `D19_KONSUMTYP` is 257113. The number of `D19_KONSUMTYP_MAX` values of `9` are 257113. 
Thus, I will manually add the conversion of all `D19_KONSUMTYP_MAX` values of `9` to `NaN`- `D19_VERSI_DATUM`, `D19_VERSI_OFFLINE_DATUM`, and `D19_VERSI_ONLINE_DATUM` - These were discussed above and will be manually converted to `D19_VERSAND_*`- `RT_KEIN_ANREIZ` and `RT_SCHNAEPPCHEN` have identical missing values to the other `CJT_` named columns and I will assume all `NaN` values are accounted for and no further conversion needs to be done. ###Code #TODO Remove LNR from list #TODO add _CD to D19_BUCH on unknowns list #TODO add D19_KONSUMTYP_MAX = 9 to nan list #TODO add D19_VERSI_DATUM, D19_VERSI_OFFLINE_DATUM, and D19_VERSI_ONLINE_DATUM = 10 to the unknowns list. # Remove _RZ suffix from Attributes and re-run our for loop to # replace value codes known to represent 'missing' with np.nonlocal value_unknown_codes.loc[value_unknown_codes['Attribute'].str.endswith('_RZ'), 'Attribute'] = value_unknown_codes.loc[value_unknown_codes['Attribute'].str.endswith('_RZ'), 'Attribute'].str.replace('_RZ', '') value_unknown_codes.loc[value_unknown_codes['Attribute'].str.contains('BUCH'), 'Attribute'] = 'D19_BUCH_CD' value_unknown_codes = value_unknown_codes.append(pd.DataFrame({'Attribute': ['D19_KONSUMTYP_MAX'], 'value': [9]})) value_unknown_codes.loc[value_unknown_codes['Attribute'].str.contains('D19_VERSAND_OFFLINE_DATUM|D19_VERSAND_ONLINE_DATUM|D19_VERSAND_DATUM'), 'Attribute'] = value_unknown_codes.loc[value_unknown_codes['Attribute'].str.contains('D19_VERSAND_OFFLINE_DATUM|D19_VERSAND_ONLINE_DATUM|D19_VERSAND_DATUM'), 'Attribute'].str.replace('_VERSAND_', '_VERSI_') missing_attributes = [] for idx, row in value_unknown_codes.iterrows(): try: azdias.loc[azdias[row['Attribute']] == row['value'], row['Attribute']] = np.nan except KeyError: missing_attributes.append(row['Attribute']) fig = plot_missing_values(azdias[azdias_missing_attr]) fig.show() ###Output _____no_output_____ ###Markdown Part 0.3 Store unknown value codes list to fileI can use this list of `attributes -> Unknow Value Code` for a future cleaning process. ###Code value_unknown_codes.to_csv('value_unknown_codes.csv', index=False) ###Output _____no_output_____ ###Markdown Row based cleaning Next we will look for rows that are missing data. If a single row is missing most of the data fields it won't help me make predictions.First I will plot a histogram to see the distribution of missing values per row. There are 366 columns in the DataFrame. I will count the missing values in each row and look for an obivous pattern. As our data set was built from serveral sources. It is likely that certain individuals might be missing an entire portion of the columns. This would present a pattern. ###Code len(azdias.columns) rownulls = azdias.isnull().sum(axis=1) rownullsmax = rownulls.max() rownulls = np.histogram(rownulls, bins=range(0, rownulls.max(), 1)) fig = go.Figure() fig.add_trace( go.Bar( x=rownulls[1], y=rownulls[0] )) fig.update_xaxes(title='Number of Missing Values') fig.update_yaxes(title='Count of Rows') fig.update_layout(showlegend=False, title = 'Distribution of Missing Values per Row') # py.plot(fig, filename = 'udsnd_4_missing', auto_open=False) ###Output _____no_output_____ ###Markdown There are three "shapes" visible in the distrobution. The first shape on the right has a steady curve. The next two shapes are skyscraper like spikes in count of individuals who are all missing that same amount of values in the row. 
The first curved shape to the left indicates that all of these individuals have data present in nearly 300 columns, and likely some from all the disparate sources, but the gradual curve of fewer and fewer missing values above 300 columns leads me to believe that the remaining data fields are truly `not applicable`, such as "Age_Child_4". Fewer and fewer individuals will have data on a fourth child, because they have no fourth child, and thus the `NaN` here is a value. Alternatively, the large spikes both lead me to believe that individuals in these bins are missing all the data from a single data source. For now, I will remove rows that represent the first spike (and a little further). The second spike of missing data may present itself as a pattern in its values in the future PCA portion. If so, I can re-adjust. ###Code # A very crude way to count the number of individuals that would be removed by threshold: temp_len = len(azdias) for i in [(50, "After left spike"), (215, 'Middle ground'), (278, "Before second spike"), (293, 'After second spike')]: print(f"{i[1]}: {temp_len - len(azdias.dropna(axis=0, how='any', thresh=i[0]))}" ) print(len(azdias.dropna(axis=0, how='any', thresh=293))) print(len(azdias.dropna(axis=0, how='any', thresh=278))) print(len(azdias.dropna(axis=0, how='any', thresh=293))/temp_len) print(len(azdias.dropna(axis=0, how='any', thresh=278))/temp_len) ###Output 0.6400174591936232 0.8262709249445424 ###Markdown Tough call. Roughly 160,000 rows will be lost in order to be sure that each data field is truly represented. In the stricter scenario we are left with 64% of the data, but this would likely produce better results. **In the real world** part of this decision would be made by understanding the business case for the value of predictions in the future. Is it more important to make a less accurate prediction with less data, or a more accurate prediction with fewer overall predictions possible due to missing data? I will start the process by including more rows and set the threshold at **278 columns**. Then, I will drop individual columns that are missing a similarly large share of their values. ###Code azdias.dropna(axis=0, how='any', thresh=278, inplace=True) # Plot missing values now that rows have been removed. fig = plot_missing_values(azdias) fig.show() # py.plot(fig, filename = 'udsnd_5_missing', auto_open=False) ###Output _____no_output_____ ###Markdown Using Plotly, I can hover over the data to see the value of each bar as a tooltip. Thus, I can simply look at the shape of this visual and see the drop-off in the rate of missing values. The first 'plateau' of values after the slow downward curve on the left sits at approximately 22%. After this 'plateau', the next level of values is at `18%`. Seeing the plateau at 22% leads me to believe a 23% cutoff is appropriate to drop all columns with more missing values than this. ###Code temp_col_na = azdias.isna().sum()/len(azdias) temp_col_na = temp_col_na.loc[temp_col_na > .23].index ###Output _____no_output_____ ###Markdown I will use this specific list in the future cleaning function to drop these columns. I don't have enough data to train on them, and therefore I will not be able to use these columns for predictions. ###Code # List of columns with more than 23% of values missing (after row cleaning) temp_col_na azdias.drop(temp_col_na, axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Feature Engineering Next is a very important and challenging portion of the data cleaning needed for this project.
I will need to identify columns that require transformations. Such as: 1. Categorical columns - Multi-label fields - boolean2. Ordinal columns3. Date columnsHere I used the HTML file created by the Pandas Profiling tool to visually inspect columns quickly to determine which were categorical versus ordinal. This allowed me to create a file to help determine which columns should be engineered and how. First, I'll look at any column with an `object` datatype. ###Code azdias.select_dtypes(include=['object']) ###Output _____no_output_____ ###Markdown Looks like the `EINGEFUEGT_AM` column is a date like object. I will start with this one. As long as all values. The meaning roughly translates "inserted" and there is no information in the data attributes file about this column. I don't know what it means, but `inserted` leads me to possibly believe this is when the data was collected? It may add to the model best to separate the date into smaller chunks and test it. ###Code azdias['EINGEFUEGT_AM'].isna().sum() # Add to cleaning function azdias['EINGEFUEGT_AM'] = pd.to_datetime(azdias['EINGEFUEGT_AM']) azdias['EINGEFUEGT_AM_year'] = azdias['EINGEFUEGT_AM'].dt.year azdias['EINGEFUEGT_AM_month'] = azdias['EINGEFUEGT_AM'].dt.month azdias.drop('EINGEFUEGT_AM', axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Next I will look at a categorical columns that has been combined. I noticed this one by looking through the Values .xlsx file and saw that the first value in the number pair represent one category and the second value in the number pair represents another category. For cleaning model training I will separate these values into their own columns to allow for the model to be slightly more general. ###Code # Combined Category print(azdias['CAMEO_INTL_2015'].isna().sum()) print(azdias['CAMEO_INTL_2015'].dtype) azdias['CAMEO_INTL_2015_0'] = azdias['CAMEO_INTL_2015']//10 azdias['CAMEO_INTL_2015_1'] = azdias['CAMEO_INTL_2015']%10 azdias.drop('CAMEO_INTL_2015', axis=1, inplace=True) ###Output _____no_output_____ ###Markdown The remaining features are categorical features which I will separate into dummie columns. ###Code categorical_features = ['CAMEO_DEU_2015', 'CJT_GESAMTTYP', 'D19_KONSUMTYP', 'D19_LETZTER_KAUF_BRANCHE', 'FINANZTYP', 'GEBAEUDETYP', 'GFK_URLAUBERTYP', 'KBA05_MAXHERST', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'NATIONALITAET_KZ', 'OST_WEST_KZ', 'RETOURTYP_BK_S', 'SHOPPER_TYP', 'WOHNLAGE', 'ZABEOTYP'] azdias = pd.get_dummies(azdias, columns=categorical_features) ###Output _____no_output_____ ###Markdown Cleaning Function ###Code def clean_AZ_dataset(dff, codes='value_unknown_codes.csv', impute=True, first_clean=False): """ Function to prepare the AZ Consumer dataset for machine learning models INPUT dff - Pandas DataFrame to be cleaned. codes = A file path to a predifined .csv file with list of value codes in the data that represent "unknown" or "missing" values. File should be formatted as such: attribute, value, 'CAMEO_DEU_2015', -1, RETURNS LNR - Pandas series with customer numbers df - Pandas DataFrame ready to be used in supervised and unsupervised models. """ print("Starting Cleaning.") # Read in special file to map codes to missing values. value_unknown_codes = pd.read_csv(codes) print("Loaded Missing Value codes.") # This is wierd, but I did this next step manually for the azdias file so that I could store it as .parquet # Thus, I am adding this step below for the mailout CSV's and a flag to signal that it needs to be run. 
if first_clean == True: # According to data map supplied "-1" means unknown. Fill non-integer numbers with np.nan and convert to float dff.CAMEO_DEUG_2015 = dff.CAMEO_DEUG_2015.replace('X', np.NaN).astype(float) dff.CAMEO_INTL_2015 = dff.CAMEO_INTL_2015.replace('XX', np.NaN).astype(float) # Loop through values replaceing values with NaN. missing_attributes = [] for idx, row in value_unknown_codes.iterrows(): try: dff.loc[dff[row['Attribute']] == row['value'], row['Attribute']] = np.nan except KeyError: missing_attributes.append(row['Attribute']) print("Finished converting missing value codes to NaN.") print(f"These fields were not found in the data {missing_attributes}") temp_len = len(dff) dff.dropna(axis=0, how='any', thresh=278, inplace=True) print(f"Dropped {temp_len-len(dff)} rows from the orginal {temp_len} rows in the dataset") try: dff.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], axis=1, inplace=True) except KeyError: pass LNR = dff['LNR'] temp_col_na = ['LNR', 'AGER_TYP', 'ALTER_HH', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'D19_BANKEN_ANZ_12', 'D19_BANKEN_ANZ_24', 'D19_BANKEN_DATUM', 'D19_BANKEN_DIREKT', 'D19_BANKEN_GROSS', 'D19_BANKEN_LOKAL', 'D19_BANKEN_OFFLINE_DATUM', 'D19_BANKEN_ONLINE_DATUM', 'D19_BANKEN_REST', 'D19_BEKLEIDUNG_GEH', 'D19_BEKLEIDUNG_REST', 'D19_BILDUNG', 'D19_BIO_OEKO', 'D19_BUCH_CD', 'D19_DIGIT_SERV', 'D19_DROGERIEARTIKEL', 'D19_ENERGIE', 'D19_FREIZEIT', 'D19_GARTEN', 'D19_GESAMT_ANZ_12', 'D19_GESAMT_ANZ_24', 'D19_GESAMT_DATUM', 'D19_GESAMT_OFFLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM', 'D19_HANDWERK', 'D19_HAUS_DEKO', 'D19_KINDERARTIKEL', 'D19_KOSMETIK', 'D19_LEBENSMITTEL', 'D19_LOTTO', 'D19_NAHRUNGSERGAENZUNG', 'D19_RATGEBER', 'D19_REISEN', 'D19_SAMMELARTIKEL', 'D19_SCHUHE', 'D19_SONSTIGE', 'D19_TECHNIK', 'D19_TELKO_ANZ_12', 'D19_TELKO_ANZ_24', 'D19_TELKO_DATUM', 'D19_TELKO_MOBILE', 'D19_TELKO_OFFLINE_DATUM', 'D19_TELKO_ONLINE_DATUM', 'D19_TELKO_REST', 'D19_TIERARTIKEL', 'D19_VERSAND_ANZ_12', 'D19_VERSAND_ANZ_24', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM', 'D19_VERSAND_REST', 'D19_VERSI_ANZ_12', 'D19_VERSI_ANZ_24', 'D19_VERSI_DATUM', 'D19_VERSI_OFFLINE_DATUM', 'D19_VERSI_ONLINE_DATUM', 'D19_VERSICHERUNGEN', 'D19_VOLLSORTIMENT', 'D19_WEIN_FEINKOST', 'EXTSEL992', 'KBA05_BAUMAX', 'KK_KUNDENTYP', 'TITEL_KZ'] dff.drop(temp_col_na, axis=1, inplace=True) print("Dropped columns with missing data.") dff['EINGEFUEGT_AM'] = pd.to_datetime(dff['EINGEFUEGT_AM']) dff['EINGEFUEGT_AM_year'] = dff['EINGEFUEGT_AM'].dt.year dff['EINGEFUEGT_AM_month'] = dff['EINGEFUEGT_AM'].dt.month dff.drop('EINGEFUEGT_AM', axis=1, inplace=True) print('Converted EINGEFUEGT_AM to year and month') dff['CAMEO_INTL_2015_0'] = dff['CAMEO_INTL_2015']//10 dff['CAMEO_INTL_2015_1'] = dff['CAMEO_INTL_2015']%10 dff.drop('CAMEO_INTL_2015', axis=1, inplace=True) print('Split CAMEO_INTL_2015 into two columns') categorical_features = ['CAMEO_DEU_2015', 'CJT_GESAMTTYP', 'D19_KONSUMTYP', 'D19_LETZTER_KAUF_BRANCHE', 'FINANZTYP', # 'GEBAEUDETYP', 'GFK_URLAUBERTYP', 'KBA05_MAXHERST', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'NATIONALITAET_KZ', 'OST_WEST_KZ', 'RETOURTYP_BK_S', 'SHOPPER_TYP', 'WOHNLAGE', 'ZABEOTYP'] dff = pd.get_dummies(dff, columns=categorical_features) print('Converted Categorical columns to dummy columns.') if impute == True: imp_median = SimpleImputer(missing_values=np.nan, strategy='median') dff_imp = imp_median.fit_transform(dff) print("Imputed NaN values with Median") return LNR, pd.DataFrame(dff_imp, 
index=dff.index, columns=dff.columns) else: return LNR, dff ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. ###Code azdias_neu = pd.read_parquet('Udacity_AZDIAS_052018.parquet') LNR, azdias_neu = clean_AZ_dataset(azdias_neu, impute=True) scaler = StandardScaler() # fit transform scaled = scaler.fit_transform(azdias_neu.astype(float)) azdias_scaled = pd.DataFrame(data=scaled, index=azdias_neu.index, columns=azdias_neu.columns) print("original shape: ", azdias_neu.shape) print("scaled shape:", azdias_scaled.shape) ###Output original shape: (736683, 492) scaled shape: (736683, 492) ###Markdown PCA Refresher: Eigenvalues - sum of squared distances from origin for the best fit line Singular Value of PC1 - The square root of the EigenvalueEigenvector - Single unit of a PCLoading scores - proportion of each value to get the eigenvector for the PC. Scree plot - Graphical representation of the percentages of variation for each Principal Component ###Code pca = PCA() pca.fit(azdias_scaled) pca_data = pca.transform(azdias_scaled) total_var = pca.explained_variance_ratio_.sum() * 100 pc_var = pca.explained_variance_ratio_ exp_var_cumul = np.cumsum(pc_var) fig = go.Figure() fig.add_trace(go.Scatter( x=list(range(1, len(exp_var_cumul))), y=exp_var_cumul, fill='tozeroy', # fill area between trace0 and trace1 mode='lines', line_color='indigo')) fig.update_layout(title = 'Cumulative Sum of Explained Variance') fig.update_yaxes(tickformat='%', title='Explained Variance', showspikes=True) fig.update_xaxes(title='Count of Principal Components', showspikes=True) fig.show() # py.plot(fig, filename='udsnd_6_pca', auto_open=False) fig = go.Figure() fig.add_trace( go.Bar( x=np.arange(len(pca.explained_variance_ratio_)), y=pca.explained_variance_ratio_ ) ) fig.update_yaxes(tickformat=".2%", title="Percent of Explained Variance") fig.update_xaxes(title = 'PCA Components') fig.update_layout(title = 'Principal Component Analysis') fig.show() # py.plot(fig, filename='udsnd_7_pca', auto_open=False) ###Output _____no_output_____ ###Markdown Principle Component AnalysisWe can see for our visuals above that like many large datasets a few components account for a large explanation of the variance. While there are a few components that are larger than the rest the largest components only account for about 25% of the total variance. I have seen some information about 85% being a good start, however that would be about 250 dimensions. At 180 dimensions I get to about 75% of variance explained. I will start there. ###Code n_components = 180 pca = PCA(n_components = n_components) pca.fit(azdias_scaled) pca_data = pca.transform(azdias_scaled) def load_score_viz(pc): ''' Creates visualization of Loading scores for each feature in a principal component. INPUT - 'pc' - An integer representing the element in the list of principle components OUTPUT - 'fig' - Plotly figure with visualization of the top 50 most impactful feature for that specific component. 
''' loading_scores = pd.Series(pca.components_[pc], index=azdias_scaled.columns) sorted_loading_scores = loading_scores.abs().sort_values(ascending=False) fig = go.Figure( go.Bar( x = sorted_loading_scores.head(50).index, y= sorted_loading_scores.head(50) ) ) fig.update_yaxes(title = "Proportion of Eigenvector") fig.update_layout(title= f"Absolute Value of Load Score - Principle Component {pc} <br><sub>{round((pca.explained_variance_ratio_[pc] * 100), 2)}% of explained variance") return fig fig = load_score_viz(0) fig.show() # py.plot(fig, filename='udsnd_8_loading', auto_open=False) fig = load_score_viz(1) fig.show() # py.plot(fig, filename='udsnd_9_loading', auto_open=False) fig = load_score_viz(2) fig.show() # py.plot(fig, filename='udsnd_10_loading', auto_open=False) ###Output _____no_output_____ ###Markdown Optimal ClustersNext I will test several Kmeans centers to find a starting point of least SSE versus trainging time ###Code n_clusters = range(5,50) scores = [MiniBatchKMeans(i, random_state=42).fit(pca_data).score(pca_data) for i in n_clusters] fig = go.Figure() fig.add_trace( go.Scatter( x=list(n_clusters), y=np.abs(scores), mode='lines+markers' ) ) fig.update_xaxes(title='Number of Clusters') fig.update_yaxes(title='Sum of Squared Errors') fig.update_layout(title='K-means Optimal Number of Cluster Centers') fig.show() # py.plot(fig, filename='udsnd_11_kmeans', auto_open=False) # re-fith with 11 clusters kmeans = KMeans(n_clusters=19, random_state=42) # general population predictions azdias_predictions = kmeans.fit_predict(pca_data) ###Output _____no_output_____ ###Markdown Read in Custome data and fit it to current clustersRun this now on scaler, pca, and Kmeans for analysis, but later store into a single pipeline for future use. ###Code cust_neu = pd.read_parquet('Udacity_CUSTOMERS_052018.parquet') LNR, cust_neu = clean_AZ_dataset(cust_neu, impute=True) cust_scaled = scaler.transform(cust_neu.astype(float)) cust_pca_data = pca.transform(cust_scaled) cust_cluster_predictions = kmeans.predict(cust_pca_data) # Cant upload Histograms to free Plotly embed site. Convert to Bar Chart x_ax = np.unique(cust_cluster_predictions, return_counts=True)[0] cust_clust = np.unique(cust_cluster_predictions, return_counts=True)[1]/len(cust_cluster_predictions) az_clust = np.unique(azdias_predictions, return_counts=True)[1]/len(azdias_predictions) fig = go.Figure() fig.add_trace( go.Bar( x=x_ax, y=cust_clust, name='Customers' ) ) fig.add_trace( go.Bar( x=x_ax, y=az_clust, name='Population' ) ) fig.update_yaxes(tickformat='.2%', title='Frequency') fig.update_xaxes(title='Customer Groups') fig.update_layout(barmode='overlay', title='Customer Segmentation') fig.update_traces(opacity=0.75) fig.show() # py.plot(fig, filname='udsnd_12_segments', auto_open=False) ###Output _____no_output_____ ###Markdown Understand our clustersNow that I can see how our customer group differs from the population, I will take a look to determine some individual features that differ. With 366 different columns of data to start it is too exhuastive to analyse each feature. However, with PCA and our Loading Score visual, we might start with those features that had the largest proportion of the eigenvector in the components that had the greatest explained variance. **Cluster 18**One cluster had more than 25% of our customer population. Cluster 18 is a great example to start to see the features that most qualify our customers. 
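One rough way to shortlist features for a single cluster (a sketch, assuming the `pca` and `kmeans` objects fitted above) is to map the cluster-18 centroid back into the original standardized feature space and look at which features score highest; this complements the loading-score view used for the comparisons below. ###Code
# Sketch: project the cluster-18 centroid from PCA space back to the original
# standardized feature space and list the features with the largest centroid values.
# (Values are in standard-deviation units because the data was scaled before PCA.)
centroid = pca.inverse_transform(kmeans.cluster_centers_[18])
top_features = (pd.Series(centroid, index=azdias_scaled.columns)
                .sort_values(ascending=False)
                .head(10))
print(top_features)
###Output _____no_output_____ ###Markdown The cells below compare a few individual features directly between the customer group and the general population.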
###Code # Add cluster column to both dataframes to filter by cluster cust_neu['cluster'] = cust_cluster_predictions def plot_feature_compare(feature, cluster): ''' Builds a plot comparing a feature's distribution for the customer group versus the population for a specific cluster. INPUT - feature: String of column name cluster: integer of customer cluster to filter on OUTPUT - Plotly figure with distribution of values for each group. ''' x_ax = np.unique(azdias_neu[feature], return_counts=True)[0] cust_mobi_reg = np.unique(cust_neu.loc[cust_neu['cluster'] == cluster, feature], return_counts=True)[1]/len(cust_neu.loc[cust_neu['cluster'] == cluster]) az_mobi_reg = np.unique(azdias_neu[feature], return_counts=True)[1]/len(azdias_neu) fig = go.Figure() fig.add_trace( go.Bar( x=x_ax, y=cust_mobi_reg, name='Customers' ) ) fig.add_trace( go.Bar( x=x_ax, y=az_mobi_reg, name='Population' ) ) fig.update_yaxes(tickformat='.2%', title='Frequency') fig.update_xaxes(title='Value') fig.update_layout(barmode='overlay', title=f'Customer Segmentation by Feature<br><sub><b>Cluster:</b> {cluster} - <b>Feature:</b> {feature}') fig.update_traces(opacity=0.75) return fig ###Output _____no_output_____ ###Markdown `MOBI_REGIO` - This feature had the highest loading score in the Principal Component with the most explained variance. The feature describes moving patterns: lower numbers mean high mobility and higher numbers mean very low mobility. It is clear the Arvato customers have lower regional mobility. ###Code fig = plot_feature_compare('MOBI_REGIO', 18) fig.show() # py.plot(fig, filename='udsnd_13_features', auto_open=False) ###Output _____no_output_____ ###Markdown In PC2 the largest loading score was `PRAEGENDE_JUGENDJAHRE` ###Code fig = plot_feature_compare('PRAEGENDE_JUGENDJAHRE', 18) fig.show() # py.plot(fig, filename='udsnd_14_features', auto_open=False) ###Output _____no_output_____ ###Markdown In PC3 the largest loading score was the car-based `KBA13_HERST_BMW_BENZ`. In the third Principal Component the highest loading scores are all related to automobiles, generally of a luxury class. This feature is the share of BMW & Mercedes Benz within the neighborhood; a score of 5 is high. ###Code fig = plot_feature_compare('KBA13_HERST_BMW_BENZ', 18) fig.show() # py.plot(fig, filename='udsnd_15_features', auto_open=False) ###Output _____no_output_____ ###Markdown `PLZ8_ANTG1` - Finally, the number of 1–2 family houses in the neighborhood of the individuals is lower for the customers in Cluster 18 ###Code fig = plot_feature_compare('PLZ8_ANTG1', 18) fig.show() # py.plot(fig, filename='udsnd_16_features', auto_open=False) ###Output _____no_output_____ ###Markdown Build reusable Pipeline for Customer segmentation I will move all the components into a single pipeline to be saved for future use on new data, as a function that can be applied to future customer data. This includes the imputed values for each column. Why? Imputing values can lead to data leakage in a traditional train/test scenario, but here I am considering the one-off scoring scenario. If this cluster segmentation needed to be productionized and a single user mapped onto it, I could not derive imputation values from that single user. The transformer in this pipeline therefore stores the median value for each feature, so a prediction can be reproduced even when it is applied to a single row of data with missing values. Of course, in our cleaning step rows with less than 70% of fields populated will be dropped.
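To make that point concrete, here is a small hypothetical sketch (toy values, not the project data) of how a fitted median imputer carries its per-column statistics along and can fill gaps in a single new record:
###Code
# Toy illustration only: a median imputer fitted once can impute a lone row later.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

toy_fit = pd.DataFrame({'a': [1.0, 3.0, 5.0], 'b': [2.0, np.nan, 4.0]})
imputer = SimpleImputer(strategy='median').fit(toy_fit)

single_row = pd.DataFrame({'a': [np.nan], 'b': [np.nan]})
print(imputer.transform(single_row))   # filled with the stored medians: [[3. 3.]]
###Output
_____no_output_____
###Markdown
The pipeline below bundles this imputation step with scaling, PCA, and KMeans so the fitted statistics travel with the saved model.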
###Code azdias_neu = pd.read_parquet('Udacity_AZDIAS_052018.parquet') LNR, azdias_neu = clean_AZ_dataset(azdias_neu, impute=True) cust_neu = pd.read_parquet('Udacity_CUSTOMERS_052018.parquet') LNR, cust_neu = clean_AZ_dataset(cust_neu, impute=True) def build_model(): """ Builds a scikit-learn Pipeline with a SimpleImputer, StandardScaler, PCA, and KMeans. Input - None Output - Sklearn Pipeline object to be fit on Population data and reused with future data. """ transformer = FeatureUnion( transformer_list=[ ('features', SimpleImputer(strategy='median')), ('indicators', MissingIndicator(features="all"))]) pipeline = make_pipeline(transformer, StandardScaler(), PCA(n_components = 180), KMeans(n_clusters=19, random_state=42)) return pipeline pipe = build_model() pipe.fit(azdias_neu) def save_model(model, model_filepath): """ Saves model as pickle object. Input - model : model object model_filepath : filepath destination for output Output - None, file stored """ pkl.dump(model, open(model_filepath, 'wb')) pass save_model(pipe, 'customer_segmentation/clusters.pkl') ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning Model---Now that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code X = pd.read_csv('Udacity_MAILOUT_052018_TRAIN.csv') LNR, X = clean_AZ_dataset(X, first_clean=True, impute=True) y = X['RESPONSE'] X = X.drop('RESPONSE', axis=1) ###Output /Users/derricklewis/.local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3146: DtypeWarning: Columns (18,19) have mixed types.Specify dtype option on import or set low_memory=False. ###Markdown Split For several quick trials I will split the data into train and test sets to verify results. ###Code X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y) ###Output _____no_output_____ ###Markdown Evaluate Several Baseline Models Here I will trial several types of models to get a rough idea of the feasibility of success. Below is a function to produce a ROC curve visualization for the predictions of each trained algorithm. ###Code def evaluate_model(MODEL, X_test, y_test, name): ''' Plots a ROC curve for a fitted classifier on held-out data.
Input - MODEL : trained model object X_test : Unseen input features to evaluate the model y_test : Unseen labels to evaluate the model name : Title string for the plot Output - Plotly figure showing the ROC curve and its AUC ''' y_pred = MODEL.predict_proba(X_test)[:, 1] fpr, tpr, thresholds = roc_curve(y_test, y_pred, pos_label=1) fig = px.area( x=fpr, y=tpr, title=f'{name}<br><sub>ROC Curve (AUC={auc(fpr, tpr):.4f})', labels=dict(x='False Positive Rate', y='True Positive Rate'), ) fig.add_shape( type='line', line=dict(dash='dash'), x0=0, x1=1, y0=0, y1=1 ) fig.update_yaxes(scaleanchor="x", scaleratio=1) fig.update_xaxes(constrain='domain') return fig ###Output _____no_output_____ ###Markdown Some models have built-in adjustments for class imbalance. I will do a quick determination of the class imbalance to be used when evaluating these models. ###Code # sum(negative instances) / sum(positive instances) class_weight = int(y.value_counts()[0]/y.value_counts()[1]) print(class_weight) ###Output 80 ###Markdown Logistic Regression ###Code from sklearn.linear_model import LogisticRegression LGR = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000, class_weight={0:1, 1:class_weight}, random_state=42)) LGR.fit(X_train, y_train) fig = evaluate_model(LGR, X_test, y_test, 'Logistic Regression') fig.show() py.plot(fig, filename='udsnd_17_log', auto_open=False) ###Output _____no_output_____ ###Markdown Support Vector Machines ###Code from sklearn.svm import SVC clf = make_pipeline(StandardScaler(), SVC(gamma = 'auto', class_weight={0:1, 1:class_weight}, probability=True, random_state=42)) clf.fit(X_train, y_train) fig = evaluate_model(clf, X_test, y_test, 'Support Vector Classifier') fig.show() py.plot(fig, filename='udsnd_18_svm', auto_open=False) ###Output _____no_output_____ ###Markdown ADABoost ###Code model = AdaBoostClassifier(random_state=42) model.fit(X_train, y_train) fig = evaluate_model(model, X_test, y_test, name='AdaBoost') fig.show() py.plot(fig, filename='udsnd_19_ada', auto_open=False) ###Output _____no_output_____ ###Markdown XGBoost ###Code model = xgb.XGBClassifier(use_label_encoder=False, eval_metric='auc', scale_pos_weight = 80, random_state=42) model.fit(X_train, y_train) fig = evaluate_model(model, X_test, y_test, name = 'XGBoost') fig.show() py.plot(fig, filename='udsnd_20_xgb', auto_open=False) ###Output _____no_output_____ ###Markdown Use XGBoost I am going to use the XGBoost algorithm and optimize for the best parameters. ###Code model = xgb.XGBClassifier( use_label_encoder=False, eval_metric='auc', # learning_rate = 0.01, scale_pos_weight = 80, n_estimators = 20, max_delta_step=5 ) param_grid = { 'learning_rate': [.01, 0.1, 0.5, .9], # 'num_leaves': [2], 'max_depth': [2, 3, 6], # 'colsample_bytree':[0.5, 1.0], # "min_data_in_leaf":[20, 100], # 'min_child_samples': [0, 50], # 'max_bin': [100, 1000], # 'reg_lambda': [1e-9, 1.0], # 'reg_alpha': [1e-9, 1.0], # 'scale_pos_weight': [80], 'n_estimators': [20,40,60] } grid = GridSearchCV(model, param_grid, cv=4, scoring='roc_auc', verbose=3) grid.fit(X, y) # evaluate_model(grid, X_test, y_test) fig = evaluate_model(grid, X_test, y_test, name='XGBoost Tuned') py.plot(fig, filename='udsnd_21_xgb2', auto_open=False) grid.best_score_ grid.best_params_ grid.get_params() ###Output _____no_output_____ ###Markdown Part 3: Kaggle Competition Now that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle.
If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('Udacity_MAILOUT_052018_TEST.csv') LNR, X = clean_AZ_dataset(mailout_test, first_clean=True, impute=True) y_pred = grid.predict_proba(X)[:, 1] submit_df = pd.concat([LNR.astype('int32'), pd.Series(y_pred)], axis =1, ignore_index=True) submit_df.columns=['LNR', 'RESPONSE'] temp = pd.read_csv('Udacity_MAILOUT_052018_TEST.csv') submit_df = temp[['LNR']].merge(submit_df, on='LNR', how='left').fillna(0) submit_df.to_csv('kaggle_submit.csv', index=False) ###Output /Users/derricklewis/.local/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3146: DtypeWarning: Columns (18,19) have mixed types.Specify dtype option on import or set low_memory=False. ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sb from sklearn.preprocessing import Imputer, StandardScaler from sklearn.decomposition import PCA from sklearn.cluster import KMeans import progressbar from sklearn.metrics import roc_curve, roc_auc_score from sklearn.model_selection import GridSearchCV, StratifiedKFold from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier from sklearn.pipeline import Pipeline # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. 
Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') azdias.head() azdias.info() customers.head() customers.info() feat_info = pd.read_excel('DIAS Attributes - Values 2017.xlsx') feat_info.head() feat_info.info() feat_info = feat_info.drop(['Unnamed: 0'], axis=1) feat_info.head() feat_info.head(10) ###Output _____no_output_____ ###Markdown Part 0.1: Match the number of features Having seen that the feature counts do not match between the feat_info attributes and the azdias columns, we will explore this further ###Code feat_info2 = feat_info.copy() feat_info_attribute = feat_info['Attribute'].fillna(method='ffill') feat_info2['Attribute'] = feat_info_attribute feat_info2.head() feat_info2.to_csv("feat_info2.csv", header=True) feat_info2 = pd.read_csv("feat_info2.csv") feat_info2 = feat_info2.drop(['Unnamed: 0'], axis=1) feat_info2.head() in_az = [] not_in_az = [] for row in feat_info2['Attribute'].value_counts().index: if row in azdias.columns: in_az.append(row) else: not_in_az.append(row) len(not_in_az) not_in_az in_feat = [] not_in_feat = [] for col in azdias.columns: if col in feat_info2['Attribute'].value_counts().index: in_feat.append(col) else: not_in_feat.append(col) len(not_in_feat) not_in_feat ###Output _____no_output_____ ###Markdown We can see that there are differences between the features described by feat_info2 and the columns in azdias. Features described in feat_info2 that do not appear in azdias are fine; however, the azdias columns that cannot be explained might hinder the interpretation of the analysis. We separate the "not_in_feat" features into two groups manually: those with a near-identical name in feat_info2, such as D19_REISEN -> D19_REISEN_RZ, and those without any counterpart, which will be dropped from our analysis.
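As a hedged aside (not part of the original workflow), difflib can suggest near-matching attribute names automatically before settling the mapping by hand:
###Code
# Sketch: suggest close name matches between the unexplained azdias columns and the
# feat_info2 attributes; columns with no suggestion are candidates to drop.
import difflib

known_attributes = [str(a) for a in feat_info2['Attribute'].dropna().unique()]
suggestions = {col: difflib.get_close_matches(col, known_attributes, n=1, cutoff=0.8)
               for col in not_in_feat}
# e.g. 'D19_REISEN' should suggest ['D19_REISEN_RZ'], while columns with no close
# counterpart return an empty list.
print({col: match for col, match in suggestions.items() if match})
###Output
_____no_output_____
###Markdown
The mapping used in the next cell was still checked by hand against the attribute spreadsheet.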
###Code final_not_in_feat = ['LNR', 'AKT_DAT_KL', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'ALTERSKATEGORIE_FEIN', 'ANZ_KINDER', 'ANZ_STATISTISCHE_HAUSHALTE', 'ARBEIT', 'CJT_KATALOGNUTZER', 'CJT_TYP_1', 'CJT_TYP_2', 'CJT_TYP_3', 'CJT_TYP_4', 'CJT_TYP_5', 'CJT_TYP_6', 'D19_KONSUMTYP_MAX', 'D19_LETZTER_KAUF_BRANCHE', 'D19_SOZIALES', 'D19_TELKO_ONLINE_QUOTE_12', 'D19_VERSI_DATUM', 'D19_VERSI_OFFLINE_DATUM', 'D19_VERSI_ONLINE_DATUM', 'D19_VERSI_ONLINE_QUOTE_12', 'DSL_FLAG', 'EINGEFUEGT_AM', 'EINGEZOGENAM_HH_JAHR', 'EXTSEL992', 'FIRMENDICHTE', 'GEMEINDETYP', 'HH_DELTA_FLAG', 'KBA13_ANTG1', 'KBA13_ANTG2', 'KBA13_ANTG3', 'KBA13_ANTG4', 'KBA13_BAUMAX', 'KBA13_GBZ', 'KBA13_HHZ', 'KBA13_KMH_210', 'KOMBIALTER', 'KONSUMZELLE', 'MOBI_RASTER', 'RT_KEIN_ANREIZ', 'RT_SCHNAEPPCHEN', 'RT_UEBERGROESSE', 'STRUKTURTYP', 'UMFELD_ALT', 'UMFELD_JUNG', 'UNGLEICHENN_FLAG', 'VERDICHTUNGSRAUM', 'VHA', 'VHN', 'VK_DHT4A', 'VK_DISTANZ', 'VK_ZG11'] azdias2 = azdias.drop(['LNR', 'AKT_DAT_KL', 'ALTER_KIND1', 'ALTER_KIND2', 'ALTER_KIND3', 'ALTER_KIND4', 'ALTERSKATEGORIE_FEIN', 'ANZ_KINDER', 'ANZ_STATISTISCHE_HAUSHALTE', 'ARBEIT', 'CJT_KATALOGNUTZER', 'CJT_TYP_1', 'CJT_TYP_2', 'CJT_TYP_3', 'CJT_TYP_4', 'CJT_TYP_5', 'CJT_TYP_6', 'D19_KONSUMTYP_MAX', 'D19_LETZTER_KAUF_BRANCHE', 'D19_SOZIALES', 'D19_TELKO_ONLINE_QUOTE_12', 'D19_VERSI_DATUM', 'D19_VERSI_OFFLINE_DATUM', 'D19_VERSI_ONLINE_DATUM', 'D19_VERSI_ONLINE_QUOTE_12', 'DSL_FLAG', 'EINGEFUEGT_AM', 'EINGEZOGENAM_HH_JAHR', 'EXTSEL992', 'FIRMENDICHTE', 'GEMEINDETYP', 'HH_DELTA_FLAG', 'KBA13_ANTG1', 'KBA13_ANTG2', 'KBA13_ANTG3', 'KBA13_ANTG4', 'KBA13_BAUMAX', 'KBA13_GBZ', 'KBA13_HHZ', 'KBA13_KMH_210', 'KOMBIALTER', 'KONSUMZELLE', 'MOBI_RASTER', 'RT_KEIN_ANREIZ', 'RT_SCHNAEPPCHEN', 'RT_UEBERGROESSE', 'STRUKTURTYP', 'UMFELD_ALT', 'UMFELD_JUNG', 'UNGLEICHENN_FLAG', 'VERDICHTUNGSRAUM', 'VHA', 'VHN', 'VK_DHT4A', 'VK_DISTANZ', 'VK_ZG11'], axis=1) azdias2 = azdias2.rename(columns = {'CAMEO_INTL_2015':'CAMEO_DEUINTL_2015', 'D19_KOSMETIK': 'D19_KOSMETIK_RZ', 'D19_BANKEN_LOKAL': 'D19_BANKEN_LOKAL_RZ', 'D19_BANKEN_REST':'D19_BANKEN_REST_RZ', 'D19_BANKEN_GROSS':'D19_BANKEN_GROSS_RZ', 'D19_BEKLEIDUNG_GEH': 'D19_BEKLEIDUNG_GEH_RZ', 'D19_BANKEN_DIREKT':'D19_BANKEN_DIREKT_RZ', 'D19_BEKLEIDUNG_REST': 'D19_BEKLEIDUNG_REST_RZ', 'D19_BILDUNG': 'D19_BILDUNG_RZ', 'D19_BIO_OEKO':'D19_BIO_OEKO_RZ', 'D19_BUCH_CD':'D19_BUCH_RZ', 'D19_DIGIT_SERV':'D19_DIGIT_SERV_RZ', 'D19_DROGERIEARTIKEL':'D19_DROGERIEARTIKEL_RZ', 'D19_ENERGIE': 'D19_ENERGIE_RZ', 'D19_FREIZEIT':'D19_FREIZEIT_RZ', 'D19_GARTEN':'D19_GARTEN_RZ', 'D19_HANDWERK': 'D19_HANDWERK_RZ', 'D19_HAUS_DEKO':'D19_HAUS_DEKO_RZ', 'D19_KINDERARTIKEL':'D19_KINDERARTIKEL_RZ', 'D19_LEBENSMITTEL': 'D19_LEBENSMITTEL_RZ', 'D19_NAHRUNGSERGAENZUNG':'D19_NAHRUNGSERGAENZUNG_RZ', 'D19_RATGEBER':'D19_RATGEBER_RZ', 'D19_REISEN': 'D19_REISEN_RZ', 'D19_SAMMELARTIKEL':'D19_SAMMELARTIKEL_RZ', 'D19_SCHUHE': 'D19_SCHUHE_RZ', 'D19_TELKO_REST': 'D19_TELKO_REST_RZ', 'D19_SONSTIGE':'D19_SONSTIGE_RZ', 'D19_TELKO_MOBILE': 'D19_TELKO_MOBILE_RZ', 'D19_TECHNIK': 'D19_TECHNIK_RZ', 'D19_VOLLSORTIMENT': 'D19_VOLLSORTIMENT_RZ', 'D19_TIERARTIKEL':'D19_TIERARTIKEL_RZ', 'D19_VERSICHERUNGEN': 'D19_VERSICHERUNGEN_RZ', 'SOHO_KZ':'SOHO_FLAG', 'KBA13_CCM_1401_2500': 'KBA13_CCM_1400_2500', 'D19_VERSAND_REST':'D19_VERSAND_REST_RZ', 'D19_WEIN_FEINKOST':'D19_WEIN_FEINKOST_RZ', 'D19_LOTTO': 'D19_LOTTO_RZ', 'KK_KUNDENTYP': 'D19_KK_KUNDENTYP' }) azdias2['CAMEO_DEUINTL_2015'].value_counts() #azdias2.to_csv("azdias2.csv", header=True) #azdias2 = pd.read_csv("azdias2.csv") #azdias2 = 
azdias2.drop(['Unnamed: 0'], axis=1) #azdias2.head() ###Output /opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2785: DtypeWarning: Columns (9,10) have mixed types. Specify dtype option on import or set low_memory=False. interactivity=interactivity, compiler=compiler, result=result) ###Markdown Part 0.2: Converting Unknown into NaN In this part, we will convert all of the feature values classified as "unknown" answers, such as -1 and 0, into NaN. But first, we will look closely at the columns with string dtypes, since some of them contain odd values of X and XX. ###Code azdias2.select_dtypes(include='object').head() azdias2['CAMEO_DEUG_2015'].value_counts() azdias2['CAMEO_DEU_2015'].value_counts() azdias2['OST_WEST_KZ'].value_counts() azdias2['CAMEO_DEUG_2015'].replace(['X'], np.nan, inplace=True) azdias2['CAMEO_DEU_2015'].replace(['XX'], np.nan, inplace=True) azdias2['CAMEO_DEUINTL_2015'].replace(['XX'], np.nan, inplace=True) ###Output _____no_output_____ ###Markdown We replaced all of the X and XX values of string-type columns with NaN, as there is no explanation of the X and XX values in feat_info ###Code feat_info2['Meaning'].value_counts() feat_nan = feat_info2[feat_info2['Meaning'].str.contains('unknown', na=False)] feat_nan nan = [] for attribute in feat_nan['Attribute'].unique(): val = feat_nan.loc[feat_nan['Attribute'] == attribute, 'Value'].astype(str).str.cat(sep=',').split(',') nan.append(val) feat_nan = pd.concat([pd.Series(feat_nan['Attribute'].unique()), pd.Series(nan)], axis=1) feat_nan.columns = ['Attribute', 'missing_or_unknown'] feat_nan for row in feat_nan['Attribute']: print(row) if row in azdias2.columns: na_map = feat_nan.loc[feat_nan['Attribute'] == row, 'missing_or_unknown'].iloc[0] na_idx = azdias2.loc[:, row].isin(na_map) azdias2.loc[na_idx, row] = np.NaN else: continue azdias2.head() #azdias2.to_csv("azdias3.csv", header=True) #feat_nan.to_csv("feat_nan.csv", header=True) #azdias3 = pd.read_csv("azdias3.csv") #azdias3 = azdias3.drop(['Unnamed: 0'], axis=1) #azdias3.head() #feat_nan = pd.read_csv("feat_nan.csv") #feat_nan = feat_nan.drop(['Unnamed: 0'], axis=1) #feat_nan.head() # Be sure to add in a lot more cells (both markdown and code) to document your # approach and findings! ###Output _____no_output_____ ###Markdown Part 0.3: Cleaning columns and rows with too many NaN In this part, we will try to identify the columns and rows that might be considered outliers because they have too many NaN values, and we will decide whether to drop them ###Code miss_col = azdias3.isnull().sum() miss_col plt.hist(miss_col) plt.xlabel('Missing Value Counts') plt.ylabel('Number of Columns') plt.show() miss_col0 = miss_col[miss_col>0].nlargest(100) miss_col0.sort_values(inplace=True) miss_col0.plot.bar(figsize=(15,10)) plt.xlabel('Column with missing values') plt.ylabel('Count of missing values') plt.grid(True) plt.show() ###Output _____no_output_____ ###Markdown We can see that there is a distinct separation among the top 100 columns with missing values at the 200k threshold of missing values.
We will drop all of the columns with more than 200k missing values ###Code miss_col1 = miss_col[miss_col>200000] azdias4 = azdias3.drop(miss_col1.index, axis=1) azdias4.info() miss_col1 feat_info2.info() feat_info3 = feat_info2[~feat_info2['Attribute'].isin(miss_col1.index)] feat_info3 miss_row = azdias4.isnull().sum(axis=1) hist_bins = [0,1,5,10,15,20,30,50,100,150,200,250,300] hist_ticks = np.array([0,1,5,10,15,20,30,50,100,150,200,250,300]) plt.figure(figsize=(15,7)) plt.hist(miss_row,bins=hist_bins) plt.xticks(hist_ticks,hist_ticks.astype(str)) plt.xlabel('Missing Value Counts') plt.ylabel('Number of Rows') plt.show() ###Output _____no_output_____ ###Markdown We can see that there is also a distinct threshold of 15 missing values per row. We decide to drop all of the rows with more than 15 missing values ###Code miss_row1 = miss_row[miss_row>15] azdias4 = azdias4.drop(miss_row1.index, axis=0) azdias4.info() #azdias4.to_csv("azdias4.csv", header=True) #feat_info3.to_csv("feat_info3.csv", header=True) #azdias4 = pd.read_csv("azdias4.csv") #azdias4 = azdias4.drop(['Unnamed: 0'], axis=1) #azdias4.head() #feat_info3 = pd.read_csv("feat_info3.csv") #feat_info3 = feat_info3.drop(['Unnamed: 0'], axis=1) #feat_info3.head() ###Output _____no_output_____ ###Markdown Part 0.4: Categorical Fix For categorical data, we would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:- For binary (two-level) categoricals that take numeric values, we can keep them without needing to do anything.- There is one binary variable that takes on non-numeric values. For this one, we need to re-encode the values as numbers or create a dummy variable.- For multi-level categoricals (three or more values), we can choose to encode the values using multiple dummy variables (e.g. via [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html)), or (to keep things straightforward) just drop them from the analysis; a small toy illustration of these rules follows below.
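As a small hypothetical illustration of these three rules (toy values only, not the real data), the re-encoding might look like this:
###Code
# Toy sketch of the three re-encoding rules described above.
import pandas as pd

toy = pd.DataFrame({'ANREDE_KZ': [1, 2, 2],            # numeric binary: keep as-is
                    'OST_WEST_KZ': ['W', 'O', 'W'],     # non-numeric binary: map to 0/1
                    'CJT_GESAMTTYP': [1.0, 3.0, 6.0]})  # multi-level: dummy-encode or drop
toy['OST_WEST_KZ'] = toy['OST_WEST_KZ'].map({'W': 1, 'O': 0})
toy = pd.get_dummies(toy, columns=['CJT_GESAMTTYP'], prefix='CJT_GESAMTTYP')
print(toy)
###Output
_____no_output_____
###Markdown
The cells below apply the same idea to the full dataframe, using replace for the binary columns and get_dummies later on for the multi-level ones.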
###Code categorical = ['ANREDE_KZ', 'BIP_FLAG','CAMEO_DEUG_2015', 'CAMEO_DEU_2015','CAMEO_DEUINTL_2015', 'CJT_GESAMTTYP', 'D19_KK_KUNDENTYP', 'FINANZTYP', 'GEBAEUDETYP', 'GFK_URLAUBERTYP', 'GREEN_AVANTGARDE', 'HAUSHALTSSTRUKTUR', 'KBA05_HERSTTEMP', 'KBA05_MAXHERST', 'KBA05_MODTEMP', 'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN', 'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'OST_WEST_KZ', 'PRAEGENDE_JUGENDJAHRE', 'SHOPPER_TYP', 'SOHO_FLAG', 'VERS_TYP', 'ZABEOTYP'] not_in_az2 = ['HAUSHALTSSTRUKTUR','GEOSCORE_KLS7','WACHSTUMSGEBIET_NB','BIP_FLAG'] miss_column1 = ['AGER_TYP', 'ALTER_HH', 'D19_BANKEN_ONLINE_QUOTE_12', 'D19_GESAMT_ONLINE_QUOTE_12', 'D19_KONSUMTYP', 'D19_LOTTO_RZ', 'D19_VERSAND_ONLINE_QUOTE_12', 'KBA05_BAUMAX', 'D19_KK_KUNDENTYP', 'TITEL_KZ'] categorical2 = ['ANREDE_KZ','CAMEO_DEUG_2015', 'CAMEO_DEU_2015','CAMEO_DEUINTL_2015', 'CJT_GESAMTTYP', 'FINANZTYP', 'GEBAEUDETYP', 'GFK_URLAUBERTYP', 'GREEN_AVANTGARDE', 'KBA05_HERSTTEMP', 'KBA05_MAXHERST', 'KBA05_MODTEMP', 'LP_FAMILIE_FEIN','LP_FAMILIE_GROB', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'LP_STATUS_FEIN', 'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'OST_WEST_KZ', 'PRAEGENDE_JUGENDJAHRE', 'SHOPPER_TYP', 'SOHO_FLAG', 'VERS_TYP', 'ZABEOTYP'] binary = [] multi = [] for col in categorical2: if azdias4[col].nunique() > 2: multi.append(col) else: binary.append(col) print('\nBinary categorical variables are : {}'.format(binary)) print('\nMulti-level categorical variables are : {}'.format(multi)) for col in binary: print(azdias4[col].value_counts()) azdias4['ANREDE_KZ'].replace([2.0,1.0], [1,0], inplace=True) azdias4['SOHO_FLAG'].replace([1.0,0.0], [1,0], inplace=True) azdias4['OST_WEST_KZ'].replace(['W','O'], [1,0], inplace=True) azdias4['VERS_TYP'].replace([2.0,1.0], [1,0], inplace=True) for col in multi: print(azdias4[col].value_counts()) azdias4['LP_FAMILIE_GROB'].replace([0.0], np.nan, inplace=True) azdias4['LP_FAMILIE_FEIN'].replace([0.0], np.nan, inplace=True) azdias4['LP_LEBENSPHASE_FEIN'].replace([0.0], np.nan, inplace=True) azdias4['LP_LEBENSPHASE_GROB'].replace([0.0], np.nan, inplace=True) azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([1,3,5,8,10,12,14]),'MAINSTREAM'] = 0 azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([2,4,6,7,9,11,13,15]),'MAINSTREAM'] = 1 azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([1,2]),'DECADE'] = 1 azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([3,4]),'DECADE'] = 2 azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([5,6,7]),'DECADE'] = 3 azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([8,9]),'DECADE'] = 4 azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([10,11,12,13]),'DECADE'] = 5 azdias4.loc[azdias4['PRAEGENDE_JUGENDJAHRE'].isin([14,15]),'DECADE'] = 6 def wealth(x): if pd.isnull(x): return np.nan else: return int(str(x)[0]) def lifestage(x): if pd.isnull(x): return np.nan else: return int(str(x)[1]) azdias4['WEALTH'] = azdias4['CAMEO_DEUINTL_2015'].apply(wealth) azdias4['LIFESTAGE'] = azdias4['CAMEO_DEUINTL_2015'].apply(lifestage) azdias4.info() azdias5 = azdias4.drop(['PRAEGENDE_JUGENDJAHRE', 'CAMEO_DEU_2015', 'CAMEO_DEUINTL_2015', 'LP_FAMILIE_GROB', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB'], axis=1) ###Output _____no_output_____ ###Markdown We drop the old multi categorical columns that have been reencoded as seperate categorical columns. We also drop the columns that have too specific answers; hnce, many answers for their classification. 
Some of the columns such as "LP_FAMILIE_GROB" is also discarded due to their indication of household and lifestage already being answered with other columns such as the new columns that we just reencode. ###Code azdias5.shape #azdias5.to_csv("azdias5.csv", header=True) #azdias5 = pd.read_csv("azdias5.csv") #azdias5 = azdias5.drop(['Unnamed: 0'], axis=1) #azdias5.head() ###Output _____no_output_____ ###Markdown Part 0.5: Cleaning Function We create the cleaning function so that we can reuse all of these cleaning function again with the customers datasets and train_test datasets as well ###Code feat_info2 = pd.read_csv("feat_info2.csv") feat_info2 = feat_info2.drop(['Unnamed: 0'], axis=1) feat_info2.head() def clean_data(df1,feat_info2): """ Perform feature trimming, re-encoding, and engineering for demographics data INPUT: df1 - Demographics DataFrame feat_info2 - Dataframe of features explanation OUTPUT: Trimmed and cleaned demographics DataFrame """ # removing and rename columns that does not have description in the feat_info df1 = df1.drop(['LNR','AKT_DAT_KL','ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4','ALTERSKATEGORIE_FEIN','ANZ_KINDER','ANZ_STATISTISCHE_HAUSHALTE', 'ARBEIT','CJT_KATALOGNUTZER','CJT_TYP_1','CJT_TYP_2','CJT_TYP_3','CJT_TYP_4','CJT_TYP_5','CJT_TYP_6','D19_KONSUMTYP_MAX','D19_LETZTER_KAUF_BRANCHE', 'D19_SOZIALES','D19_TELKO_ONLINE_QUOTE_12','D19_VERSI_DATUM','D19_VERSI_OFFLINE_DATUM','D19_VERSI_ONLINE_DATUM','D19_VERSI_ONLINE_QUOTE_12', 'DSL_FLAG','EINGEFUEGT_AM','EINGEZOGENAM_HH_JAHR','EXTSEL992','FIRMENDICHTE','GEMEINDETYP','HH_DELTA_FLAG','KBA13_ANTG1', 'KBA13_ANTG2','KBA13_ANTG3','KBA13_ANTG4','KBA13_BAUMAX','KBA13_GBZ','KBA13_HHZ','KBA13_KMH_210','KOMBIALTER','KONSUMZELLE', 'MOBI_RASTER','RT_KEIN_ANREIZ','RT_SCHNAEPPCHEN','RT_UEBERGROESSE','STRUKTURTYP','UMFELD_ALT','UMFELD_JUNG','UNGLEICHENN_FLAG', 'VERDICHTUNGSRAUM','VHA','VHN','VK_DHT4A','VK_DISTANZ','VK_ZG11'], axis=1) df1 = df1.rename(columns = {'CAMEO_INTL_2015':'CAMEO_DEUINTL_2015', 'D19_KOSMETIK': 'D19_KOSMETIK_RZ', 'D19_BANKEN_LOKAL': 'D19_BANKEN_LOKAL_RZ', 'D19_BANKEN_REST':'D19_BANKEN_REST_RZ', 'D19_BANKEN_GROSS':'D19_BANKEN_GROSS_RZ', 'D19_BEKLEIDUNG_GEH': 'D19_BEKLEIDUNG_GEH_RZ', 'D19_BANKEN_DIREKT':'D19_BANKEN_DIREKT_RZ', 'D19_BEKLEIDUNG_REST': 'D19_BEKLEIDUNG_REST_RZ', 'D19_BILDUNG': 'D19_BILDUNG_RZ', 'D19_BIO_OEKO':'D19_BIO_OEKO_RZ', 'D19_BUCH_CD':'D19_BUCH_RZ', 'D19_DIGIT_SERV':'D19_DIGIT_SERV_RZ', 'D19_DROGERIEARTIKEL':'D19_DROGERIEARTIKEL_RZ', 'D19_ENERGIE': 'D19_ENERGIE_RZ', 'D19_FREIZEIT':'D19_FREIZEIT_RZ', 'D19_GARTEN':'D19_GARTEN_RZ', 'D19_HANDWERK': 'D19_HANDWERK_RZ', 'D19_HAUS_DEKO':'D19_HAUS_DEKO_RZ', 'D19_KINDERARTIKEL':'D19_KINDERARTIKEL_RZ', 'D19_LEBENSMITTEL': 'D19_LEBENSMITTEL_RZ', 'D19_NAHRUNGSERGAENZUNG':'D19_NAHRUNGSERGAENZUNG_RZ', 'D19_RATGEBER':'D19_RATGEBER_RZ', 'D19_REISEN': 'D19_REISEN_RZ', 'D19_SAMMELARTIKEL':'D19_SAMMELARTIKEL_RZ', 'D19_SCHUHE': 'D19_SCHUHE_RZ', 'D19_TELKO_REST': 'D19_TELKO_REST_RZ', 'D19_SONSTIGE':'D19_SONSTIGE_RZ', 'D19_TELKO_MOBILE': 'D19_TELKO_MOBILE_RZ', 'D19_TECHNIK': 'D19_TECHNIK_RZ', 'D19_VOLLSORTIMENT': 'D19_VOLLSORTIMENT_RZ', 'D19_TIERARTIKEL':'D19_TIERARTIKEL_RZ', 'D19_VERSICHERUNGEN': 'D19_VERSICHERUNGEN_RZ', 'SOHO_KZ':'SOHO_FLAG', 'KBA13_CCM_1401_2500': 'KBA13_CCM_1400_2500', 'D19_VERSAND_REST':'D19_VERSAND_REST_RZ', 'D19_WEIN_FEINKOST':'D19_WEIN_FEINKOST_RZ', 'D19_LOTTO': 'D19_LOTTO_RZ', 'KK_KUNDENTYP': 'D19_KK_KUNDENTYP'}) # convert missing value codes into NaNs df1['CAMEO_DEUG_2015'].replace(['X'], np.nan, inplace=True) 
df1['CAMEO_DEU_2015'].replace(['XX'], np.nan, inplace=True) df1['CAMEO_DEUINTL_2015'].replace(['XX'], np.nan, inplace=True) feat_nan = feat_info2[feat_info2['Meaning'].str.contains('unknown', na=False)] nan = [] for attribute in feat_nan['Attribute'].unique(): val = feat_nan.loc[feat_nan['Attribute'] == attribute, 'Value'].astype(str).str.cat(sep=',').split(',') nan.append(val) feat_nan = pd.concat([pd.Series(feat_nan['Attribute'].unique()), pd.Series(nan)], axis=1) feat_nan.columns = ['Attribute', 'missing_or_unknown'] for row in feat_nan['Attribute']: print(row) if row in df1.columns: na_map = feat_nan.loc[feat_nan['Attribute'] == row, 'missing_or_unknown'].iloc[0] na_idx = df1.loc[:, row].isin(na_map) df1.loc[na_idx, row] = np.NaN else: continue # Removing column and rows witn high amount of NaNs miss_col1 = ['AGER_TYP','ALTER_HH','D19_BANKEN_ONLINE_QUOTE_12','D19_GESAMT_ONLINE_QUOTE_12','D19_KK_KUNDENTYP','D19_KONSUMTYP', 'D19_LOTTO_RZ','D19_VERSAND_ONLINE_QUOTE_12','KBA05_BAUMAX','TITEL_KZ'] df1 = df1.drop(miss_col1, axis=1) row_na = df1.shape[1] - df1.count(axis = 1) rows_to_drop = df1.index[row_na > 15] df1.drop(rows_to_drop, axis=0, inplace = True) # Fixing categorical and mixed features df1['ANREDE_KZ'].replace([2.0,1.0], [1,0], inplace=True) df1['SOHO_FLAG'].replace([1.0,0.0], [1,0], inplace=True) df1['OST_WEST_KZ'].replace(['W','O'], [1,0], inplace=True) df1['VERS_TYP'].replace([2.0,1.0], [1,0], inplace=True) df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([1,3,5,8,10,12,14]),'MAINSTREAM'] = 0 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([2,4,6,7,9,11,13,15]),'MAINSTREAM'] = 1 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([1,2]),'DECADE'] = 1 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([3,4]),'DECADE'] = 2 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([5,6,7]),'DECADE'] = 3 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([8,9]),'DECADE'] = 4 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([10,11,12,13]),'DECADE'] = 5 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([14,15]),'DECADE'] = 6 def wealth(x): if pd.isnull(x): return np.nan else: return int(str(x)[0]) def lifestage(x): if pd.isnull(x): return np.nan else: return int(str(x)[1]) df1['WEALTH'] = df1['CAMEO_DEUINTL_2015'].apply(wealth) df1['LIFESTAGE'] = df1['CAMEO_DEUINTL_2015'].apply(lifestage) df1 = df1.drop(['PRAEGENDE_JUGENDJAHRE', 'CAMEO_DEU_2015','CAMEO_DEUINTL_2015', 'LP_FAMILIE_GROB', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB'], axis=1) # Return the cleaned dataframe. return df1 ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. 
Part 1.1: Imputing, Dummy and Feature Scaling - In order to perform the analysis properly, we need to deal with all of the NaN values, since sklearn requires that the data have no missing values for its estimators to work, which is why we will use an imputer to fill the missing values with the mode of each column (a better fit for categorical variables) - Then, we will one-hot encode the categorical variables with more than 2 levels (multi) - Finally, we will use StandardScaler() for the feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. ###Code azdias5.shape fill_nan = Imputer(strategy='most_frequent') azdias6 = pd.DataFrame(fill_nan.fit_transform(azdias5)) azdias6.columns = azdias5.columns azdias6.index = azdias5.index multi2 = ['CAMEO_DEUG_2015', 'CJT_GESAMTTYP','DECADE', 'FINANZTYP', 'GEBAEUDETYP', 'GFK_URLAUBERTYP', 'KBA05_HERSTTEMP', 'KBA05_MAXHERST', 'KBA05_MODTEMP', 'LIFESTAGE', 'NATIONALITAET_KZ', 'SHOPPER_TYP', 'WEALTH', 'ZABEOTYP'] for col in multi2: dummy = pd.get_dummies(azdias6[col], prefix = col) azdias6 = pd.concat([azdias6, dummy], axis = 1) azdias7 = azdias6.drop(multi2, axis=1) scaler = StandardScaler() azdias8 = scaler.fit_transform(azdias7) azdias8 = pd.DataFrame(azdias8, columns=list(azdias7)) ###Output _____no_output_____ ###Markdown Part 1.2: PCA & Interpretation After scaling our data, we will perform dimensionality reduction using principal component analysis. First, we will fit PCA with all components in order to see the full variance structure of the data. Then, we will reduce the number of components, especially those that do not contribute much to the variance. ###Code # Apply PCA to the data. pca = PCA() azdias_pca = pca.fit_transform(azdias8) def scree_plot(pca): ''' Creates a scree plot associated with the principal components INPUT: pca - the result of instantiating PCA in scikit-learn OUTPUT: None ''' num_components=len(pca.explained_variance_ratio_) ind = np.arange(num_components) vals = pca.explained_variance_ratio_ plt.figure(figsize=(10, 6)) ax = plt.subplot(111) cumvals = np.cumsum(vals) ax.bar(ind, vals) ax.plot(ind, cumvals) for i in range(num_components): ax.annotate(r"%s%%" % ((str(vals[i]*100)[:4])), (ind[i]+0.2, vals[i]), va="bottom", ha="center", fontsize=4.5) ax.xaxis.set_tick_params(width=0) ax.yaxis.set_tick_params(width=2, length=12) ax.set_xlabel("Principal Component") ax.set_ylabel("Variance Explained (%)") plt.title('Explained Variance Per Principal Component') scree_plot(pca) pca.explained_variance_ratio_.sum() # Re-apply PCA to the data while selecting for number of components to retain. pca = PCA(n_components=200) azdias_pca = pca.fit_transform(azdias8) scree_plot(pca) pca.explained_variance_ratio_.sum() ###Output _____no_output_____ ###Markdown For PCA there is no definitive rule for choosing the number of components other than the [elbow](https://towardsdatascience.com/a-one-stop-shop-for-principal-component-analysis-5582fb7e0a9c) method, which is itself not very reliable; I decided to keep a round number of principal components that explains around 90% of the variability ###Code def pca_weights(pca, i): df = pd.DataFrame(pca.components_, columns=list(azdias8.columns)) weights = df.iloc[i].sort_values(ascending=False) return weights pca_weight_0 = pca_weights(pca, 0) pca_weight_0.nlargest(10) pca_weight_0.nsmallest(10) # Map weights for the second principal component to corresponding feature names # and then print the linked values, sorted by weight.
pca_weight_1 = pca_weights(pca, 1) pca_weight_1.nlargest(10) pca_weight_1.nsmallest(10) # Map weights for the third principal component to corresponding feature names # and then print the linked values, sorted by weight. pca_weight_2 = pca_weights(pca, 2) pca_weight_2.nlargest(10) pca_weight_2.nsmallest(10) ###Output _____no_output_____ ###Markdown Discussion 1.2: Interpret Principal Components Answer: We will describe the relationship of the top three positive and negative features for each principal component. 1. PCA0 MOBI_REGIO, PLZ8_ANTG1 and KBA05_ANTG1 are the top three positively weighted features of the first principal component. This indicates that low mobility and a high number of 1-2 family houses in the PLZ8 and KBA05 areas are associated with this component. PLZ8_ANTG3, D19_GESAMT_DATUM and WEALTH_5.0 are the top three negatively weighted features. This indicates a high number of 6-10 family houses in the PLZ8 area, no recent transactions in the complete file, and poor wealth. This principal component therefore measures the share of house types in the indicated areas, especially PLZ8, together with mobility and the volume of transactions of certain products, and the positive and negative weights complement each other. 2. PCA1 KBA13_HERST_BMW_BENZ, KBA13_SEG_OBEREMITTELKLASSE and KBA13_MERCEDES are the top three positively weighted features of the second principal component. This indicates a high share of BMW & Mercedes Benz (upper middle and upper class cars) within the PLZ8. KBA13_SITZE_5, KBA13_KMH_140_210 and KBA13_SEG_KLEINWAGEN are the top three negatively weighted features of this component. This indicates a low number of cars with 5 seats or a top speed between 140-210 km/h, as well as a low share of small and very small cars (Ford Fiesta, Ford Ka etc.) in the PLZ8. This principal component therefore measures car class, especially the high-end class, which is associated with fewer seats, higher top speed, and certain high-end brand appearances. 3. PCA2 D19_GESAMT_ANZ_24, D19_GESAMT_ANZ_12 and D19_VERSAND_ANZ_24 are the top three positively weighted features of the last principal component. This indicates high transaction activity for all products in the last 1-2 years, especially through mail order. D19_GESAMT_ONLINE_DATUM, D19_GESAMT_DATUM and D19_VERSAND_ONLINE_DATUM are the top three negatively weighted features of this component. This indicates no recent online transactions and no recent transactions in the complete file and segmented mail. This principal component therefore measures the effectiveness of mail order, and the ineffectiveness of online transactions, for high transaction volumes. Part 1.3: Clustering In this substep, we will apply k-means clustering to the dataset and use the average within-cluster distance from each point to its assigned cluster's centroid to decide on a number of clusters to keep. Once we have selected a final number of clusters to use, we will re-fit a KMeans instance to perform the clustering operation ###Code def get_kmeans_score(data, center): kmeans = KMeans(n_clusters = center) model = kmeans.fit(data) score = np.abs(model.score(data)) return score scores = [] centers = list(range(2,30,2)) for center in centers: score = round(get_kmeans_score(azdias_pca, center)) print('center : {} score : {}'.format(center, score)) scores.append(score) plt.plot(centers, scores, linestyle='--', marker='o', color='b'); plt.xlabel('K'); plt.ylabel('SSE'); plt.title('SSE vs.
K'); kmeans_10 = KMeans(n_clusters = 10) model_10 = kmeans_10.fit(azdias_pca) azdias_pred = model_10.predict(azdias_pca) ###Output _____no_output_____ ###Markdown Part 1.4: Cleaning, Imputing and Feature Scaling Customers In this part, we will apply to the customer data all of the steps that were performed on the general population. We are going to use the fits from the general population to clean, transform, and cluster the customer data. Then, we will interpret how the general population fits apply to the customer data. ###Code #customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') customers = pd.read_csv("customers.csv") customers = customers.drop(['Unnamed: 0'], axis=1) customers.head() customers = customers.drop(['PRODUCT_GROUP', 'CUSTOMER_GROUP','ONLINE_PURCHASE'], axis=1) customers_clean = clean_data(customers, feat_info2) customers_clean.shape azdias6.shape customers_imputer = pd.DataFrame(fill_nan.transform(customers_clean)) customers_imputer.columns = customers_clean.columns customers_imputer.index = customers_clean.index for col in multi2: dummy = pd.get_dummies(customers_imputer[col], prefix = col) customers_imputer = pd.concat([customers_imputer, dummy], axis = 1) customers_dummy = customers_imputer.drop(multi2, axis=1) list_as = list(azdias7.columns) list_dum = list(customers_dummy.columns) def setdiff_sorted(array1,array2,assume_unique=False): ans = np.setdiff1d(array1,array2,assume_unique).tolist() if assume_unique: return sorted(ans) return ans main_list = setdiff_sorted(list_as,list_dum) main_list customers_dummy['GEBAEUDETYP_5.0'] = 0 customers_dummy['GEBAEUDETYP_5.0'].value_counts() customers_scaler = scaler.transform(customers_dummy) customers_scaler = pd.DataFrame(customers_scaler, columns=list(customers_dummy)) # use the PCA fitted on the general population (transform only, no re-fit) customers_pca = pca.transform(customers_scaler) customers_pred = model_10.predict(customers_pca) ###Output _____no_output_____ ###Markdown Part 1.5: Comparing the Customer Data to Demographics Data At this point, we have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, we will compare the two cluster distributions to see where the strongest customer base for the company is. ###Code # Compare the proportion of data in each cluster for the customer data to the # proportion of data in each cluster for the general population.
figure, axs = plt.subplots(nrows=1, ncols=2, figsize = (10,5)) figure.subplots_adjust(hspace = .5, wspace=.3) sb.countplot(customers_pred, ax=axs[0]) axs[0].set_title('Customer Clusters') sb.countplot(azdias_pred, ax=axs[1]) axs[1].set_title('General Clusters') cust_df = pd.DataFrame(customers_pred,columns=['Cluster']).reset_index().groupby('Cluster').count()/len(customers_pred)*100 gen_df = pd.DataFrame(azdias_pred,columns=['Cluster']).reset_index().groupby('Cluster').count()/len(azdias_pred)*100 diff = (cust_df-gen_df) diff.rename_axis({'index':'DiffPerc'}, axis=1, inplace=True) cust_df.rename_axis({'index':'CustPerc'}, axis=1, inplace=True) gen_df.rename_axis({'index':'GenPerc'}, axis=1, inplace=True) diff = diff.join(cust_df).join(gen_df).sort_values('DiffPerc',ascending=False) diff.fillna(0, inplace=True) diff centroid_4 = pca.inverse_transform(model_10.cluster_centers_[4]) over = pd.Series(data = centroid_4, index = customers_scaler.columns).sort_values() over centroid_5 = pca.inverse_transform(model_10.cluster_centers_[5]) under = pd.Series(data = centroid_5, index = customers_scaler.columns).sort_values() under ###Output _____no_output_____ ###Markdown Discussion 1.5: Compare Customer Data to Demographics DataIf there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data suggests that group of persons to be outside of the target demographics.**Answer:** From the list we can see that the most overrepresented cluster is cluster 4 and the most underrepresented culster is cluster 5. The charateristics of cluster 4 are: - Pre-Family Couples & Singles- Most likely German nationality (prename analysis)- Owned VW-Audi- High chance to be an investor - Reside in mixed building (unknown whether residential or commercial)While the charateristics of cluster 5 are: - Reside in mixed building (unknown whether residential or commercial)- Most likely German nationality (prename analysis)- Fair supplied energy consumer- High money saving tendency - High income Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
Part 2.1: Cleaning, Imputing and Scaling ###Code #mailout_train = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_train = pd.read_csv("mailout_train.csv") mailout_train = mailout_train.drop(['Unnamed: 0'], axis=1) mailout_train.head() mailout_train2 = clean_data(mailout_train, feat_info2) mailout_train2['RESPONSE'] Ytr = mailout_train2['RESPONSE'] Xtr= mailout_train2.drop(['RESPONSE'], axis=1) Xtr_imputer = pd.DataFrame(fill_nan.fit_transform(Xtr)) Xtr_imputer.columns = Xtr.columns Xtr_imputer.index = Xtr.index for col in multi2: dummy = pd.get_dummies(Xtr_imputer[col], prefix = col) Xtr_imputer = pd.concat([Xtr_imputer, dummy], axis = 1) Xtr_imputer = Xtr_imputer.drop(multi2, axis=1) Xtr_scaler = scaler.fit_transform(Xtr_imputer) Xtr_scaler = pd.DataFrame(Xtr_scaler, columns=list(Xtr_imputer)) ###Output C:\Users\Audi\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py:645: DataConversionWarning: Data with input dtype uint8, float64 were all converted to float64 by StandardScaler. return self.partial_fit(X, y) C:\Users\Audi\Anaconda3\lib\site-packages\sklearn\base.py:464: DataConversionWarning: Data with input dtype uint8, float64 were all converted to float64 by StandardScaler. return self.fit(X, **fit_params).transform(X) ###Markdown Part 2.2: Comparing models and algorithms ###Code clf_A = LogisticRegression(random_state=135) clf_B = RandomForestClassifier(random_state=135) clf_C = AdaBoostClassifier(random_state=135) clf_D = GradientBoostingClassifier(random_state=135) def classifier_roc(clf, param_grid, X=Xtr_scaler, y=Ytr): """ Fit a classifier using GridSearchCV and calculates ROC AUC INPUT: - clf (classifier): classifier to fit - param_grid (dict): classifier parameters used with GridSearchCV - Xtr_scaler (DataFrame): features of the training dataframe - Ytr (DataFrame): labels of the training dataframe OUTPUT: - classifier: fitted classifier - prints elapsed time and ROX AUC """ # cv uses StratifiedKFold # scoring roc_auc available as parameter grid = GridSearchCV(estimator=clf, param_grid=param_grid, scoring='roc_auc', verbose = 3, n_jobs=10, cv=5) grid.fit(X, y) print(grid.best_score_) return grid.best_estimator_ print(classifier_roc(clf_A, {})) print(classifier_roc(clf_B, {})) print(classifier_roc(clf_C, {})) print(classifier_roc(clf_D, {})) ###Output Fitting 5 folds for each of 1 candidates, totalling 5 fits ###Markdown | Model | ROC_AUC || :------------: | :---------------: | | Logistic | 0.5731 | | RandomForest | 0.5006 | | Adaboost | 0.5535 | | GradientBoost | 0.5573 | We choose Logistic Regression because it has the best ROC_AUC results Part 2.3: Optimizing Model ###Code param_grid = {'C': [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1e0], 'penalty': ['l2']} grid = GridSearchCV(estimator=LogisticRegression(random_state=135), param_grid=param_grid, scoring='roc_auc', verbose = 3, n_jobs=10, cv=5) grid.fit(Xtr_scaler, Ytr) print(grid.best_score_) print(grid.best_estimator_) ###Output Fitting 5 folds for each of 8 candidates, totalling 40 fits ###Markdown By giving the C parameters more range, we can increase the roc_auc score from 0.5731 to 0.5843 Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. 
If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. Part 3.1: Cleaning, Imputing and Scaling ###Code #mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') mailout_test = pd.read_csv('mailout_test.csv') mailout_test = mailout_test.drop(['Unnamed: 0'], axis=1) mailout_test.head() def clean_data2(df1,feat_info2): """ Perform feature trimming, re-encoding, and engineering for demographics data INPUT: df1 - Demographics DataFrame feat_info2 - Dataframe of features explanation OUTPUT: Trimmed and cleaned demographics DataFrame """ # removing and rename columns that does not have description in the feat_info (LNR needed) df1 = df1.drop(['AKT_DAT_KL','ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4','ALTERSKATEGORIE_FEIN','ANZ_KINDER','ANZ_STATISTISCHE_HAUSHALTE', 'ARBEIT','CJT_KATALOGNUTZER','CJT_TYP_1','CJT_TYP_2','CJT_TYP_3','CJT_TYP_4','CJT_TYP_5','CJT_TYP_6','D19_KONSUMTYP_MAX','D19_LETZTER_KAUF_BRANCHE', 'D19_SOZIALES','D19_TELKO_ONLINE_QUOTE_12','D19_VERSI_DATUM','D19_VERSI_OFFLINE_DATUM','D19_VERSI_ONLINE_DATUM','D19_VERSI_ONLINE_QUOTE_12', 'DSL_FLAG','EINGEFUEGT_AM','EINGEZOGENAM_HH_JAHR','EXTSEL992','FIRMENDICHTE','GEMEINDETYP','HH_DELTA_FLAG','KBA13_ANTG1', 'KBA13_ANTG2','KBA13_ANTG3','KBA13_ANTG4','KBA13_BAUMAX','KBA13_GBZ','KBA13_HHZ','KBA13_KMH_210','KOMBIALTER','KONSUMZELLE', 'MOBI_RASTER','RT_KEIN_ANREIZ','RT_SCHNAEPPCHEN','RT_UEBERGROESSE','STRUKTURTYP','UMFELD_ALT','UMFELD_JUNG','UNGLEICHENN_FLAG', 'VERDICHTUNGSRAUM','VHA','VHN','VK_DHT4A','VK_DISTANZ','VK_ZG11'], axis=1) df1 = df1.rename(columns = {'CAMEO_INTL_2015':'CAMEO_DEUINTL_2015', 'D19_KOSMETIK': 'D19_KOSMETIK_RZ', 'D19_BANKEN_LOKAL': 'D19_BANKEN_LOKAL_RZ', 'D19_BANKEN_REST':'D19_BANKEN_REST_RZ', 'D19_BANKEN_GROSS':'D19_BANKEN_GROSS_RZ', 'D19_BEKLEIDUNG_GEH': 'D19_BEKLEIDUNG_GEH_RZ', 'D19_BANKEN_DIREKT':'D19_BANKEN_DIREKT_RZ', 'D19_BEKLEIDUNG_REST': 'D19_BEKLEIDUNG_REST_RZ', 'D19_BILDUNG': 'D19_BILDUNG_RZ', 'D19_BIO_OEKO':'D19_BIO_OEKO_RZ', 'D19_BUCH_CD':'D19_BUCH_RZ', 'D19_DIGIT_SERV':'D19_DIGIT_SERV_RZ', 'D19_DROGERIEARTIKEL':'D19_DROGERIEARTIKEL_RZ', 'D19_ENERGIE': 'D19_ENERGIE_RZ', 'D19_FREIZEIT':'D19_FREIZEIT_RZ', 'D19_GARTEN':'D19_GARTEN_RZ', 'D19_HANDWERK': 'D19_HANDWERK_RZ', 'D19_HAUS_DEKO':'D19_HAUS_DEKO_RZ', 'D19_KINDERARTIKEL':'D19_KINDERARTIKEL_RZ', 'D19_LEBENSMITTEL': 'D19_LEBENSMITTEL_RZ', 'D19_NAHRUNGSERGAENZUNG':'D19_NAHRUNGSERGAENZUNG_RZ', 
'D19_RATGEBER':'D19_RATGEBER_RZ', 'D19_REISEN': 'D19_REISEN_RZ', 'D19_SAMMELARTIKEL':'D19_SAMMELARTIKEL_RZ', 'D19_SCHUHE': 'D19_SCHUHE_RZ', 'D19_TELKO_REST': 'D19_TELKO_REST_RZ', 'D19_SONSTIGE':'D19_SONSTIGE_RZ', 'D19_TELKO_MOBILE': 'D19_TELKO_MOBILE_RZ', 'D19_TECHNIK': 'D19_TECHNIK_RZ', 'D19_VOLLSORTIMENT': 'D19_VOLLSORTIMENT_RZ', 'D19_TIERARTIKEL':'D19_TIERARTIKEL_RZ', 'D19_VERSICHERUNGEN': 'D19_VERSICHERUNGEN_RZ', 'SOHO_KZ':'SOHO_FLAG', 'KBA13_CCM_1401_2500': 'KBA13_CCM_1400_2500', 'D19_VERSAND_REST':'D19_VERSAND_REST_RZ', 'D19_WEIN_FEINKOST':'D19_WEIN_FEINKOST_RZ', 'D19_LOTTO': 'D19_LOTTO_RZ', 'KK_KUNDENTYP': 'D19_KK_KUNDENTYP'}) # convert missing value codes into NaNs df1['CAMEO_DEUG_2015'].replace(['X'], np.nan, inplace=True) df1['CAMEO_DEU_2015'].replace(['XX'], np.nan, inplace=True) df1['CAMEO_DEUINTL_2015'].replace(['XX'], np.nan, inplace=True) feat_nan = feat_info2[feat_info2['Meaning'].str.contains('unknown', na=False)] nan = [] for attribute in feat_nan['Attribute'].unique(): val = feat_nan.loc[feat_nan['Attribute'] == attribute, 'Value'].astype(str).str.cat(sep=',').split(',') nan.append(val) feat_nan = pd.concat([pd.Series(feat_nan['Attribute'].unique()), pd.Series(nan)], axis=1) feat_nan.columns = ['Attribute', 'missing_or_unknown'] for row in feat_nan['Attribute']: print(row) if row in df1.columns: na_map = feat_nan.loc[feat_nan['Attribute'] == row, 'missing_or_unknown'].iloc[0] na_idx = df1.loc[:, row].isin(na_map) df1.loc[na_idx, row] = np.NaN else: continue # Removing column ONLY with high amount of NaNs miss_col1 = ['AGER_TYP','ALTER_HH','D19_BANKEN_ONLINE_QUOTE_12','D19_GESAMT_ONLINE_QUOTE_12','D19_KK_KUNDENTYP','D19_KONSUMTYP', 'D19_LOTTO_RZ','D19_VERSAND_ONLINE_QUOTE_12','KBA05_BAUMAX','TITEL_KZ'] df1 = df1.drop(miss_col1, axis=1) # Fixing categorical and mixed features df1['ANREDE_KZ'].replace([2.0,1.0], [1,0], inplace=True) df1['SOHO_FLAG'].replace([1.0,0.0], [1,0], inplace=True) df1['OST_WEST_KZ'].replace(['W','O'], [1,0], inplace=True) df1['VERS_TYP'].replace([2.0,1.0], [1,0], inplace=True) df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([1,3,5,8,10,12,14]),'MAINSTREAM'] = 0 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([2,4,6,7,9,11,13,15]),'MAINSTREAM'] = 1 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([1,2]),'DECADE'] = 1 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([3,4]),'DECADE'] = 2 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([5,6,7]),'DECADE'] = 3 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([8,9]),'DECADE'] = 4 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([10,11,12,13]),'DECADE'] = 5 df1.loc[df1['PRAEGENDE_JUGENDJAHRE'].isin([14,15]),'DECADE'] = 6 def wealth(x): if pd.isnull(x): return np.nan else: return int(str(x)[0]) def lifestage(x): if pd.isnull(x): return np.nan else: return int(str(x)[1]) df1['WEALTH'] = df1['CAMEO_DEUINTL_2015'].apply(wealth) df1['LIFESTAGE'] = df1['CAMEO_DEUINTL_2015'].apply(lifestage) df1 = df1.drop(['PRAEGENDE_JUGENDJAHRE', 'CAMEO_DEU_2015','CAMEO_DEUINTL_2015', 'LP_FAMILIE_GROB', 'LP_FAMILIE_FEIN', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB'], axis=1) # Return the cleaned dataframe. 
return df1 mailout_test2 = clean_data2(mailout_test, feat_info2) LNR = mailout_test2['LNR'] mailout_test2 = mailout_test2.drop(['LNR'], axis=1) test_imputer = pd.DataFrame(fill_nan.fit_transform(mailout_test2)) test_imputer.columns = mailout_test2.columns test_imputer.index = mailout_test2.index for col in multi2: dummy = pd.get_dummies(test_imputer[col], prefix = col) test_imputer = pd.concat([test_imputer, dummy], axis = 1) test_imputer = test_imputer.drop(multi2, axis=1) test_scaler = scaler.fit_transform(test_imputer) test_scaler = pd.DataFrame(test_scaler, columns=list(test_imputer)) ###Output C:\Users\Audi\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py:645: DataConversionWarning: Data with input dtype uint8, float64 were all converted to float64 by StandardScaler. return self.partial_fit(X, y) C:\Users\Audi\Anaconda3\lib\site-packages\sklearn\base.py:464: DataConversionWarning: Data with input dtype uint8, float64 were all converted to float64 by StandardScaler. return self.fit(X, **fit_params).transform(X) ###Markdown Part 3.2: Uploading Kaggle.csv ###Code param_grid = {'C': [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1e0], 'penalty': ['l2']} grid = GridSearchCV(estimator=LogisticRegression(random_state=135), param_grid=param_grid, scoring='roc_auc', verbose = 3, n_jobs=10, cv=5) grid.fit(Xtr_scaler, Ytr) y_preds = grid.predict_proba(test_scaler) kaggle = pd.DataFrame({'LNR':LNR.astype(np.int32), 'RESPONSE':y_preds[:, 1]}) kaggle.to_csv('kaggle.csv', index = False) kaggle.head() ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code # !pip install pyarrow==2.0.0 # !pip install umap # wait_for_me = True # dummy code that prevents jupyter from executing the following cells before this one has completed import time import os import numpy as np import pandas as pd from scipy import stats import umap.umap_ as umap from sklearn.ensemble import RandomForestClassifier, StackingClassifier from sklearn.model_selection import train_test_split, RandomizedSearchCV from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve, auc from sklearn.cluster import OPTICS from bokeh.plotting import figure, output_file, show from bokeh.io import output_notebook from bokeh.layouts import column, gridplot from bokeh.models import ColumnDataSource, FactorRange, Label from bokeh.palettes import Spectral6, Cividis, Category10, Category20, RdYlBu from bokeh.transform import factor_cmap output_notebook() from tqdm.notebook import tqdm from datetime import datetime from functools import partial from sys import getsizeof import gc from IPython.display import clear_output pd.set_option('plotting.backend', 'pandas_bokeh') ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. 
Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data # csv files are cumbersome to wrok with - convert them to the parquet format so we can load them quicker def load_data(path, name): """ Load a csv file if a parquet file does not exist otherwise load the parquet file. If the parquet file does not exist it will be created and saved in the current working directory. This speeds up loading times for the next run (by about 4x). INPUT path - path to the csv file name - name to save the csv file to in parquet format OUTPUT df - pandas dataframe with object columns converted to string """ if not os.path.exists(f'{name}.parquet'): print(f'{name} parquet not found - loading csv') df = pd.read_csv(path, sep=';') print(f'{name} csv loaded') for c in df.columns: if df[c].dtype == object: df[c] = df[c].astype(str) print(f"{name}: {c} converted to string") df.to_parquet(f'{name}.parquet') print(f'{name} converted to parquet') else: print(f'{name} parquet found - loading') df = pd.read_parquet(f'{name}.parquet') print(f'{name} parquet loaded') return df %%time # load the two datasets azdias = load_data('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', 'azdias') customers = load_data('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', 'customers') # parse attributes sheet attributes = pd.read_excel('DIAS Attributes - Values 2017.xlsx', header=1).drop(columns=['Unnamed: 0', 'Description']).fillna(method='ffill') # make the attribute labels match the labels in the dataset where we could figure out what matches attributes.loc[:,'Attribute'] = attributes['Attribute'].str.replace('_RZ', '') attributes['Attribute'].replace( { 'D19_BUCH': 'D19_BUCH_CD', 'CAMEO_DEUINTL_2015': 'CAMEO_INTL_2015', 'D19_KK_KUNDENTYP': 'KK_KUNDENTYP', 'KBA13_CCM_1400_2500': 'KBA13_CCM_1401_2500', 'SOHO_FLAG': 'SOHO_KZ', }, inplace=True ) # get number of values associated with an attribute nr_attribute_values = attributes.groupby('Attribute').count()['Value'] # filter out attributes that are only 1 row (these dont have a pre-devined value for unknowns) attributes_filt = attributes[attributes['Attribute'].isin(nr_attribute_values[nr_attribute_values > 1].index)] ###Output _____no_output_____ ###Markdown Clean dataset meta info ###Code def setup_dataframe(df, index_col='LNR'): """ Setup the dataframe index to the specified column. For this data its the LNR column. Rid the dataframe of columns that contain meta information on the data, that is not important to the task. The function runs in place and does not return a value. Input - DataFrame - dataframe to set the index on. 
- index_column - the column to use as the index. """ # Entry ID = LNR (does not overlap between azdias and customers) df.set_index(index_col, inplace=True) # EINGEFUEGT tanslates to inserted and is in the form of a date # drop so we don't errenously correlate when the data appeared in the dataset to things we observe in the data df.drop(columns=['EINGEFUEGT_AM'], inplace=True) # setup both the datasets to prepare them for further analysis setup_dataframe(azdias) setup_dataframe(customers) ###Output _____no_output_____ ###Markdown Basic checks on number of columns ###Code # check how many columns there are print( f"{len(azdias.columns)} - number of columns in azdias dataframe\n" + f"{len(customers.columns)} - number of columns in customers dataframe\n" + f"{len(set(azdias.columns).intersection(set(customers.columns)))} - number of columns that overlap betwen azdias and customers\n" + f"{len(nr_attribute_values)} - number of columns described in attributes file\n" + f"{len(set(nr_attribute_values.index) - set(azdias.columns))} - number of attribures not in dataframe columns\n" + f"{len(set(azdias.columns) - set(nr_attribute_values.index))} - number of columns in dataframe not in attributes" ) # 364 columns are too many to analyze individually. # All comuns in azdias is in customers. # The attributes file does not contain information of all columns in the data. # There are columns in the data not contained in the attributes file. These will need further investigation. ###Output 364 - number of columns in azdias dataframe 367 - number of columns in customers dataframe 364 - number of columns that overlap betwen azdias and customers 314 - number of columns described in attributes file 4 - number of attribures not in dataframe columns 54 - number of columns in dataframe not in attributes ###Markdown Drop columns we know too little aboutColumns we could manually treat or decode have been treated above so these are the remaining cases.Often the speed with which we can get to a good enough answer is preferred for decision makers, so they can start working on their approach while futher analsysis refines the results. We can always come back later and try to squeeze more information out of these columns if required, if the rest of the dataset does not provide sufficient predictive capability given the bussiness objectives or for a version 2 iteration to improve. ###Code # identify columns in our data that is not described by the accompanying information sheet low_to_no_info_columns = set(azdias.columns) - set(nr_attribute_values.index) # drop the columns we know too little about azdias.drop(columns=low_to_no_info_columns, inplace=True) customers.drop(columns=low_to_no_info_columns, inplace=True) ###Output _____no_output_____ ###Markdown Check for duplicate rows ###Code # find rows that are duplicated in our dataset a_dups, c_dups = azdias.duplicated(), customers.duplicated() # Check the number of missing values in this duplicate data - turns out its mostly missing anway (azdias.loc[a_dups].isna().sum()/a_dups.sum()).sort_values(ascending=False).head(210) # There is some mismatch in the degree to which data is missing between the two datasets, nothing to worry about too much. 
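# the next expression reports what fraction of each frame consists of duplicated rows
# (note: the boolean-mask drop a few lines below is equivalent to calling
#  DataFrame.drop_duplicates(inplace=True) directly; keeping a_dups / c_dups around
#  lets us inspect the duplicated rows before discarding them)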
a_dups.sum()/len(azdias), c_dups.sum()/len(customers) # drop the duplicated rows azdias.drop(index=(a_dups[a_dups==True].index), inplace=True) customers.drop(index=(c_dups[c_dups==True].index), inplace=True) ###Output _____no_output_____ ###Markdown Basic data type cleaning ###Code # instead of searching for object, remove known accepted datatypes # there may be some strange things we could miss if we only look for object data types cols_requiring_transformation = azdias.dtypes[(azdias.dtypes != np.int64) & (azdias.dtypes != np.float64)] cols_requiring_transformation # X / XX missing data labels are very rare and goes up in frequency in both datasets as nans go up # conclusion: replace them with NaNs. pd.DataFrame([ azdias[cols_requiring_transformation.index].isin(['X', 'XX']).sum()/len(azdias), azdias[cols_requiring_transformation.index].isin(['nan']).sum()/len(azdias), customers[cols_requiring_transformation.index].isin(['X', 'XX']).sum()/len(customers), customers[cols_requiring_transformation.index].isin(['nan']).sum()/len(customers), ], index=['A X/XX', 'A nan', 'C X/XX', 'C nan']).T def process_column_dtype(val, base_type=float, nans=[]): """ Convert value to type of base_type, unless str.lower(value) is in nans (all lower) then replace with np.NaN Input val - value to convert base_type - data type value is to be converted to nans = list of values that should be processed as a NaN Output value of type base_type or np.NaN """ if str.lower(val) in nans: return np.NaN else: return base_type(val) def datm(x): """ Helper function to convert datetimes. Input - string in format %Y-%m-%d %H:%M:%S (e.g. 1992-02-10 14:13:59) Output - datetime """ return datetime.strptime(x, '%Y-%m-%d %H:%M:%S') # create functions that can be used in a pandas apply (only 1 input argument) # sometimes it's hard when you try to be DRY process_column_cat = partial(process_column_dtype, base_type=str, nans=['nan', 'x', 'xx']) process_column_datetime = partial(process_column_dtype, base_type=datm, nans=['nan', 'x', 'xx']) process_column_float = partial(process_column_dtype, base_type=float, nans=['nan', 'x', 'xx']) def apply_cleaning_process(df, col_to_proc): """ Takes a dataframe and a dict mapping columns to the cleaning process to be applied. Input df - DataFrame to apply cleaning process to col_to_proc - dict with keys as the column names, and values as the process to apply to those columns Output None - modifies dataframe in place """ for c, p in col_to_proc.items(): df[c] = df[c].apply(p) col_to_proc = { # columns that are numeric 'CAMEO_DEUG_2015' : process_column_float, 'CAMEO_INTL_2015' : process_column_float, # columns that are categories 'CAMEO_DEU_2015' : process_column_cat, 'OST_WEST_KZ' : process_column_cat, } # clean both datasets apply_cleaning_process(azdias, col_to_proc) apply_cleaning_process(customers, col_to_proc) def generate_categorical_replacement_dict(values): """ Generates a dict that maps the values to a numeric. Input - list like of values Output - a dict with keys as the values provided, and a numeric value as the value. """ replacements = dict() for n, k in enumerate(values): if type(k) == str: replacements[k] = n+1 return replacements def categorical_to_numeric(df, col, pre_calculated_replacement=None): """ Replaces non numeric categorical values with numeric values. Uses the sort order of the non numerics to assign a numeric value. Returns the replacement value mapping if one is not provided so the same mapping can be applied to other dataframes columns ensuring consistency. 
Input df - A pandas dataframe col - A column that this transformation should be applied to pre_calculated_replacement - a dict that maps between the current column values and the new column values Returns None if replacement is provided otherwise: replacement - a dict that maps between the current column values and the new column values """ if pre_calculated_replacement is None: vals = df[col].sort_values().unique() # String categories with missing data. replacement = generate_categorical_replacement_dict(vals) else: replacement = pre_calculated_replacement df[col].replace(replacement, inplace=True) if pre_calculated_replacement is None: return replacement else: return None # convert these from categorial to numeric making sure the conversion is consistent between the datasets CAMEO_DEU_2015_replacements = categorical_to_numeric(azdias, 'CAMEO_DEU_2015') categorical_to_numeric(customers, 'CAMEO_DEU_2015', CAMEO_DEU_2015_replacements) OST_WEST_KZ_replacements = categorical_to_numeric(azdias, 'OST_WEST_KZ') categorical_to_numeric(customers, 'OST_WEST_KZ', OST_WEST_KZ_replacements) # double check we have all object columns treated azdias.dtypes[(azdias.dtypes != np.int64) & (azdias.dtypes != np.float64)] ###Output _____no_output_____ ###Markdown Clean missing data ###Code # most rows contain some form of missing data! # dropping all rows with missing data would be fatal. (azdias.isnull().sum(axis=1) > 0).sum()/len(azdias), (customers.isnull().sum(axis=1) > 0).sum()/len(customers) def find_value_for_unknown(group): """ Takes a dataframe of Attributes Values and Meanings. The meanings are parsed to find the Values that represent unkown or missing data. If no unkown label exist, the minimum value in the group minus 1 is returned. Input - A dataframe (or dataframe group prodiced as part of a group by). Output - a list of values that should be used to label unknowns within the dataframe. 
""" # all the labels indicating missing data missing_meanings = ['unknown', 'unknown / no main age detectable', 'no transactions known', 'no transaction known'] # find all rows that contain a missing label or equivalent missing_values = group[group['Meaning'].isin(missing_meanings)]['Value'].values # if there is only 1 missing value check if it is a string # sometimes there is more than 1 missing Value but its formatted as a string after reading the spreadsheet if len(missing_values) == 1: if type(missing_values[0]) == str: # split the string and return all missing values as a list of ints return [int(i) for i in missing_values[0].split(',')] elif len(missing_values) > 1: # There should not be more than 1 row with a missing label - should not happen so throw an error raise ValueError('More than 1 row contains a missing definition.') else: pass return missing_values missing_labels = attributes_filt.groupby('Attribute').apply(find_value_for_unknown) # Calculate for each column in each dataset what % of the labels are the missing / unkown label azdias_missing = dict() customers_missing = dict() for c in tqdm(missing_labels.index): if c in azdias.columns: azdias_missing[c] = azdias[c].isin(missing_labels.loc[c]).sum()/len(azdias) if c in customers.columns: customers_missing[c] = customers[c].isin(missing_labels.loc[c]).sum()/len(customers) # combine the missing label data with the NAN / NULL % into a single dataframe (this will make plotting easier) all_data = pd.DataFrame( [azdias.isnull().sum()/len(azdias), customers.isnull().sum()/len(customers), pd.Series(azdias_missing), pd.Series(customers_missing)], index=['azdias_null', 'customers_null', 'azdias_missing', 'customers_missing'] ).T all_data.index.name='features' all_data.fillna(0, inplace=True) all_data['azdias_all'] = all_data['azdias_null'] + all_data['azdias_missing'] all_data['customers_all'] = all_data['customers_null'] + all_data['customers_missing'] all_data.sort_values(by='azdias_all', ascending=False, inplace=True) all_data.head(10) # plot the missing an NULL values for each column layout = column() page_size = 32 pages_total = (len(all_data)//page_size) for k in range(0,len(all_data),page_size): plot_data = all_data.iloc[k:k+page_size] features = list(plot_data.index.unique()) cols = ['azdias', 'customers'] factors = [ (feature, col) for feature in features for col in cols ] unk_type = ['null', 'missing'] null = [] missing = [] for feature, source in factors: null.append(plot_data.loc[feature, f"{source}_null"]) missing.append(plot_data.loc[feature, f"{source}_missing"]) source = ColumnDataSource(data=dict( x=factors, null=null, missing=missing, )) p = figure(x_range=FactorRange(*factors), plot_width=950, plot_height=500, title=f"Overview of missing data in datasets (plot {k//page_size+1} of {pages_total+1})", toolbar_location=None, tools="") p.vbar_stack(unk_type, x='x', width=0.8, alpha=0.5, color=["blue", "red"], line_color=None, line_width=0, source=source, legend_label=unk_type) p.y_range.start = 0 p.y_range.end = 1.05 p.xaxis.major_label_orientation = np.pi/2 p.xaxis.group_label_orientation = np.pi/2 p.xgrid.grid_line_color = None p.legend.location = "top_right" p.legend.orientation = "horizontal" layout.children.append(p) show(layout) ###Output _____no_output_____ ###Markdown Cleaning ALTER_KINDX columnspremature feature engineering - continue later if required ###Code # # we can combine these columns to obtain 1 that has fewer missing values... 
# # the "alter" word seems to be used to group into age buckets # # if we combine them we "only" have 91% of the data missing instead of 99.9% # len(azdias[['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4']].mean(axis=1).round().dropna())/len(azdias) # def muli_columns_row_mean(df, cols, fillna=-1): # return df[cols].mean(axis=1).round().fillna(fillna) # def muli_columns_row_count_non_nan(df, cols): # return ((df[cols].isnull() == False).sum(axis=1)) # ALTER_KIND_cols = ['ALTER_KIND1','ALTER_KIND2','ALTER_KIND3','ALTER_KIND4'] # azdias['ALTER_KIND_M'] = muli_columns_row_mean(azdias,ALTER_KIND_cols) # azdias['ALTER_KIND_C'] = muli_columns_row_count_non_nan(azdias,ALTER_KIND_cols) # customers['ALTER_KIND_M'] = muli_columns_row_mean(customers,ALTER_KIND_cols) # customers['ALTER_KIND_C'] = muli_columns_row_count_non_nan(customers,ALTER_KIND_cols) # azdias.drop(columns=ALTER_KIND_cols, inplace=True) # customers.drop(columns=ALTER_KIND_cols, inplace=True) ###Output _____no_output_____ ###Markdown Cleaning missing data ###Code # if there are more than 1 missing label use the first one, and replace the 2nd one with the first def clean_missing(df1, missing_labels): """ Makes the missing labels consistent through the dataset. If there is more than 1 missing label, the first one is used. Additionally returns which columns in the dataset is missing from the labels, and which lables are missing from the dataset. Input df - DataFrame to treat missing_labels - a series that has as index the column names and as values a list of the possible missing values Output df_cols_missing_from_labels - columns in the dataset thats is missing from the labels label_cols_missing_from_df - columns in the labels that is missing from the dataset """ df_cols_missing_from_labels = list(df1.columns) label_cols_missing_from_df = [] # go through all values that we have a index mapping for for c in tqdm(missing_labels.index): if c in df_cols_missing_from_labels: # remove colum if it has been treated df_cols_missing_from_labels.remove(c) if len(missing_labels[c]) > 1: # if there is more than 1 value to map to, map all of the rest to the first one replacement_mapping = dict() for k in missing_labels[c][1:]: replacement_mapping[k] = missing_labels[c][0] df1[c].replace(replacement_mapping, inplace=True) else: label_cols_missing_from_df.append(c) return df_cols_missing_from_labels, label_cols_missing_from_df df_cols_missing_from_labels, label_cols_missing_from_df = clean_missing(azdias, missing_labels) _, _ = clean_missing(customers, missing_labels) ###Output _____no_output_____ ###Markdown Fill missing valuesThere are various ways to treat missing values. Due to the multiple procesess in which missing data ends up the dataset, it was chosen to not mix these. So if data was lablelled as missing that label was kept (unless there were multiple labels for missing, in which case the first label was used and all other missing labels were replaced with the first). Missing data in the form of NaNs were replaced with the smallest value in the data -1. The data is mostly categorical. ###Code # replace all nans with a numeric value not found in the data missing_fill_value = azdias.min()-1 azdias.fillna(missing_fill_value, inplace=True) customers.fillna(missing_fill_value, inplace=True) # update the missing labels array so we can use it elsewhere def merge(r): """ Merges the missing values into a single list. Values in each column does not have to be in a list. Removes NaNs. 
""" col_vals = list(r) arr = [] for elem in col_vals: if '__len__' in dir(elem): # if __len__ is implemented its a list like objecct arr.extend(list(elem)) elif np.isnan(elem) == False: # keep it if its not NaN arr.append(elem) arr = np.array(list(set(arr))) arr.sort() return arr missing_labels_all = pd.DataFrame([missing_labels, missing_fill_value]).T missing_labels_all = missing_labels_all.apply(merge, axis=1) missing_labels_all # make sure the operation completed and that there are no longer any nans in the data azdias.isna().sum().sum() == 0, customers.isna().sum().sum() == 0 ###Output _____no_output_____ ###Markdown Convert columns to int64 if they dont lose information in the process ###Code can_convert_to_int = [] cannot_convert_to_int = [] for c in azdias.dtypes[azdias.dtypes == np.float64].index: # if all values in float form equals the value in int form if (azdias[c] == azdias[c].astype(np.int64)).sum() == len(azdias[c]): # then we can convert that column to int without losing information can_convert_to_int.append(c) else: # otherwise at last some of the float values contain information that is lost in the conversion cannot_convert_to_int.append(c) print(f"{c} is not convertable from float to int") print(f"{len(can_convert_to_int)} columns can be converted from float to int, while {len(cannot_convert_to_int)} can not") # convert all to the columns that we can to int azdias[can_convert_to_int] = azdias[can_convert_to_int].astype(np.int64) customers[can_convert_to_int] = customers[can_convert_to_int].astype(np.int64) ###Output 223 columns can be converted from float to int, while 0 can not ###Markdown Find and treat data that has long tails ###Code # Calculate some properties of the features so problematic ones can be identified column_stats = azdias.quantile([0, 0.01, 0.25, 0.5, 0.75, 0.99, 1]).T column_stats['IQR'] = column_stats[0.75] - column_stats[0.25] column_stats['l'] = column_stats[0.25] - 1.5 * column_stats['IQR'] column_stats['u'] = column_stats[0.75] + 1.5 * column_stats['IQR'] column_stats['num_vals'] = azdias.apply(pd.Series.unique).apply(len) column_stats['upper_outlier_percentage'] = ( (azdias > column_stats['u']) ).sum()/len(azdias) # find columns that has the most outliers column_stats[(column_stats['num_vals'] > 10) & (column_stats['upper_outlier_percentage'] > 0)] # some households with very large number of people before = azdias['ANZ_PERSONEN'].value_counts().copy() # unlikely that distinguishing between 8 and 9 or even between 8 and 40 will have a significant impact on # likelihood of being a customer # very small % of data points affected with this adjustment azdias.loc[azdias['ANZ_PERSONEN'] >= 8,'ANZ_PERSONEN'] = 8 customers.loc[customers['ANZ_PERSONEN'] >= 8,'ANZ_PERSONEN'] = 8 after = azdias['ANZ_PERSONEN'].value_counts().copy() def plot_columns_before_after(before, after, c): p = figure(plot_width=500, plot_height=400, toolbar_location='right', tools="box_zoom") after = after.sort_index() before = before.sort_index() p.line(before.index, before.values, line_color='blue', legend_label=f'{c} before') p.line(after.index, after.values, line_color='red', legend_label=f'{c} after') p.title.text = f"{c} transformation" p.yaxis.axis_label = 'Numer of occurences' p.xaxis.axis_label = 'Column values' p.xgrid.grid_line_color = None return show(p) plot_columns_before_after(before, azdias['ANZ_PERSONEN'].value_counts().copy(), 'ANZ_PERSONEN') # The tail of this feature is in steps of 100, while the body is not. 
# It does not make sense to have this as a sparse continious variable (could be very prone to overfitting) # Convert entire feature into buckets of 100 before = azdias['KBA13_ANZAHL_PKW'].value_counts().copy() # use ceiling for 2 reasons # 1. having no cars is significantly different to having 1 car # 2. the distribution has a stop at 1300 when rounding, and with this amount of data that large a step is unlikely to # be natural. azdias['KBA13_ANZAHL_PKW'] = (np.ceil((azdias['KBA13_ANZAHL_PKW']/100))*100) customers['KBA13_ANZAHL_PKW'] = (np.ceil((customers['KBA13_ANZAHL_PKW']/100))*100) plot_columns_before_after(before, azdias['KBA13_ANZAHL_PKW'].value_counts(), 'KBA13_ANZAHL_PKW') before = azdias['ANZ_HAUSHALTE_AKTIV'].value_counts() # bin data above the median as things are too extreme in the tails ANZ_HAUSHALTE_AKTIV_quantiles = azdias['ANZ_HAUSHALTE_AKTIV'].quantile([0.5,0.6,0.7,0.8,0.9]) def bin_the_tails(df, column, quantiles): """ Takes a list of values (quantile values) bins all data in between these to the lowest of edges. """ prev = np.inf for v in np.flip(quantiles.values): df.loc[(df[column] >= v) & (df[column] < prev), column] = v prev = v bin_the_tails(azdias, 'ANZ_HAUSHALTE_AKTIV', ANZ_HAUSHALTE_AKTIV_quantiles) bin_the_tails(customers, 'ANZ_HAUSHALTE_AKTIV', ANZ_HAUSHALTE_AKTIV_quantiles) plot_columns_before_after(before, azdias['ANZ_HAUSHALTE_AKTIV'].value_counts(), 'ANZ_HAUSHALTE_AKTIV') ###Output _____no_output_____ ###Markdown Find highly correlated features ###Code # # Calclate cross correlation matrix between all columns # %%time # a_corr = azdias.corr() # # Remove the correlaton between the columns and themselves # np.fill_diagonal(a_corr.values, 0) # # Find coclumns that are highly correlated # pd.DataFrame([a_corr.abs().idxmax(axis=1),a_corr.abs().max(axis=1)], index=['max_corr_col', 'value']).T.sort_values(by='value', ascending=False).head(20) # a_corr['KBA13_HERST_SONST'].abs().sort_values(ascending=False) # a_corr['CAMEO_DEU_2015'].abs().sort_values(ascending=False) # The correlated columns identified above that we should drop correlated_cols_to_drop = [ 'KBA13_FAB_SONSTIGE', # exact copy of KBA13_HERST_SONST 'CAMEO_DEUG_2015', # just a less detailed version of CAMEO_DEU_2015 'LP_LEBENSPHASE_GROB', # a GROB suffix is a less detailed version of a FEIN suffix 'LP_STATUS_GROB', 'LP_FAMILIE_GROB', ] # Drop columns azdias.drop(columns=correlated_cols_to_drop, inplace=True) customers.drop(columns=correlated_cols_to_drop, inplace=True) # # plot a correlation heatmap # a_corr_flat = a_corr.stack() # axis_values = pd.DataFrame(list(a_corr_flat.index), columns=['x','y']) # a_corr_flat.values.max(), a_corr_flat.values.min() # factors = list(a_corr.columns) # x = axis_values['x'] # y = axis_values['y'] # colors = [] # for k in (np.round(a_corr_flat.values*5).astype(int)+5): # colors.append(RdYlBu[11][k]) # hm = figure(title="Cross Correlation Heatmap", tools="hover", toolbar_location=None, # x_range=factors, y_range=factors, plot_width=950, plot_height=950) # hm.rect(x, y, color=colors, width=1, height=1) # hm.xaxis.major_label_orientation = np.pi/2 # hm.yaxis.major_label_text_font_size = "3pt" # hm.xaxis.major_label_text_font_size = "3pt" # show(hm) ###Output _____no_output_____ ###Markdown missing data discussionThere are two clear ways in which data can be missing. The first is a null in the dataset, caused by a missing value in the original CSV file. The second is data that has explicitly been labelled as unkown or missing in the dataset. 
It is possible that each of these ways in which data can be missing, could be the result of different mechanisms each containing different information (e.g. the difference between trying to obtain the data and failing, and not trying). It could also be the case that these two different mechanisms are equivalent and thus should be merged to avoid noise in our dataset. There are some columns that are mainly missing data - which should be dropped.There is a clear bias in the datasets between azdias and customers when comparing missing data, most notable in the KBA13 prefixed features, with customers being almost double as likely to have these features missing. Ivestigating and knowing why these fields end up missing would be very important to determine if the fact that the data is missing should be exploited. ###Code # use a KS test to determine if the distributions of the data are similar # if the data distribution is too similar then this does not tell us anything about our customers. def filter_missing_values(series, missing_vals): """ Returns values that does not represent a value that indicates the value is missing or NaN. Input Series - The series that should be filtered mssing_vals - The list of values that represents a value that is missing. Output Series consisiting only of the values that are not labelled as missing. """ return series[series.isin(missing_vals) == False] similarity = dict() for n, c in enumerate(tqdm(azdias.columns)): """ Uses the KS statistic to determine if the distributions are the same. If the KS statistic is large or the p-value is small, then we can reject the hypothesis that the distributions of the two samples are the same. """ m = missing_labels_all[c] # ks, p = stats.ks_2samp(azdias[c], customers[c]) ks, p = stats.ks_2samp(filter_missing_values(azdias[c], m), filter_missing_values(customers[c], m)) similarity[c] = {'ks': ks, 'p': p} similarity_scores = pd.DataFrame(similarity).T def calc_pdf_cdf(data, bins): """ Calculates a PDF and CDF from data using the bins. Input data - list like to calculate PDF and CDF from bins - bins to use to group data by Output bins - bin centers of the PDF and CDF pdf - probability distribution of the data cdf - cumulative probability distribution of the data """ if np.diff(bins).min() < 1: raise Error('minimum bin distance is less than 1') else: hist_bins = np.append(bins[0] - 0.5, (bins + 0.5)) count, bins_count = np.histogram(data.loc[np.isfinite(data)], hist_bins) pdf = count / sum(count) cdf = np.cumsum(pdf) return bins, pdf, cdf def ks_bar_plot(c, azdias=azdias, customers=customers, missing_labels_all=missing_labels_all, attributes=attributes): """ Generates a bar plot for a provided column. 
""" m = missing_labels_all[c] azdi_valid = filter_missing_values(azdias[c], m) cust_valid = filter_missing_values(customers[c], m) numeric_bins = azdi_valid.append(cust_valid).unique() numeric_bins.sort() bins = [] for i in list(numeric_bins): valid_attrib = attributes[(attributes['Attribute'] == c) & (attributes['Value'] == i)]['Meaning'] if len(valid_attrib) > 0: bins.append(valid_attrib.values[0]) else: bins.append(i) x1, pdf1, cdf1 = calc_pdf_cdf(azdi_valid, bins=numeric_bins) x2, pdf2, cdf2 = calc_pdf_cdf(cust_valid, bins=numeric_bins) datasets = ['a', 'c'] x_cat = [ (str(b), d) for b in bins for d in datasets ] counts_cat = list(pd.DataFrame([pdf1, pdf2], index=datasets).T.stack().values) source = ColumnDataSource(data=dict(x=x_cat, counts=counts_cat)) p = figure(x_range=FactorRange(*x_cat), plot_width=950, plot_height=500, toolbar_location=None, tools="") p.vbar(x='x', top='counts', width=0.9, source=source, line_color=None, fill_color=factor_cmap('x', palette=Category10[10][0:2], factors=datasets, start=1, end=2)) p.title.text = f"{c} [ks:{similarity_scores.loc[c]['ks']:0.4f} p:{similarity_scores.loc[c]['p']:0.4f}]" p.y_range.start = 0 p.xaxis.major_label_orientation = 0 p.xaxis.group_label_orientation = np.pi/2 p.xgrid.grid_line_color = None return p # plot a few of the most simmilar features layout = column() for c in tqdm(similarity_scores.sort_values(by=['ks'], ascending=True).head(5).index): layout.children.append(ks_bar_plot(c)) show(layout) # Find most similar features similarity_scores[similarity_scores['p'] > 0.001] # Drop columns that are distributed too similarly between datasets, they will not provide separation ks_cols_to_drop = similarity_scores[similarity_scores['p'] > 0.001].index azdias.drop(columns=ks_cols_to_drop, inplace=True) customers.drop(columns=ks_cols_to_drop, inplace=True) ###Output _____no_output_____ ###Markdown UMAPTried clustering to see if there was anything insightful. Did not produce anything amazing. 
###Code # azdias_c = azdias.copy() # customers_c = customers.copy() # azdias_c['customer'] = False # customers_c['customer'] = True # combined_data = azdias_c.append(customers_c[azdias_c.columns]) # mini_data = combined_data.sample(100000, random_state=42) # labels = mini_data['customer'] # train = mini_data.drop(columns=['customer']) # plots = [] # for n_neighbors in tqdm([200, 500, 1000]): # sub_plots = [] # for min_dist in tqdm([0.3, 0.99]): # # gridsearch revealed default settings perform very well # reducer = umap.UMAP(random_state=42, init='random', n_neighbors=n_neighbors, min_dist=min_dist) # reducer.fit(train) # embedding = reducer.transform(train) # x = embedding[:,0] # y = embedding[:,1] # colors = labels.replace({False: Cividis[11][-1], True: Cividis[11][0]}) # p = figure(title=f"Overall view of customers vs non-customers in dataset [n_neighbors = {n_neighbors}, min_dist = {min_dist}]") # p.scatter(x, y, radius=0.1, # fill_color=colors, fill_alpha=0.25, # line_color=None) # sub_plots.append(p) # # layout.children.append(p) # plots.append(sub_plots) # grid = gridplot(plots) # show(grid) # layout = column() # for num_cols in [21]: # tqdm(range(15,25,1)): # ks_most_dissimilar = similarity_scores.sort_values(by=['ks'], ascending=False).head(num_cols).index # ks_mini_data = combined_data[[*ks_most_dissimilar, 'customer']].sample(100000, random_state=42) # ks_labels = ks_mini_data['customer'] # ks_train = ks_mini_data.drop(columns=['customer']) # ks_reducer = umap.UMAP(random_state=42, init='random', n_neighbors=15, min_dist=0.1) # ks_reducer.fit(ks_train) # ks_embedding = ks_reducer.transform(ks_train) # x = ks_embedding[:,0] # y = ks_embedding[:,1] # colors = ks_labels.replace({False: Cividis[11][-1], True: Cividis[11][0]}) # p = figure(title=f"Overall view of customers vs non-customers in dataset [top {num_cols} dissimilar columns]") # p.scatter(x, y, radius=0.1, # fill_color=colors, fill_alpha=0.1, # line_color=None) # layout.children.append(p) # show(layout) # %%time # ks_emb_sample = ks_embedding # clustering = OPTICS(min_samples=1000, n_jobs=-1) # clustering.fit(ks_emb_sample) # max(clustering.labels_) # cluster_labels = list(pd.DataFrame(clustering.labels_).value_counts().index.get_level_values(0)) # c = 0 # r = dict() # for l in cluster_labels: # if l == -1: # r[l] = '#cfcfcf' # elif c < len(Category10[10]): # r[l] = Category10[10][c] # c += 1 # else: # r[l] = '#cfcfcf' # emb_arr = pd.DataFrame(np.vstack([ks_emb_sample.T, ks_labels.astype(int).values, clustering.labels_, ]).T, columns=['Emb_0', 'Emb_1', 'is_customer', 'Cluster']) # emb_arr.head(10) # locations = emb_arr.groupby('Cluster').mean().sort_values(by='is_customer', ascending=False).reset_index() # locations.merge((emb_arr['Cluster'].value_counts()/len(emb_arr)).rename('Cluster Size'), # left_on='Cluster', right_index=True) # colors = list(pd.Series(clustering.labels_).replace(r).values) # x = ks_embedding[:,0] # y = ks_embedding[:,1] # p = figure(title=f"Clusters within the dataset ranked from highest to lowest customer density") # p.scatter(x, y, radius=0.1, # fill_color=colors, fill_alpha=0.25, # line_color=None) # for k in locations.iterrows(): # l, xl, yl = k[0], k[1]['Emb_0'], k[1]['Emb_1'] # if l != -1: # citation = Label(x=xl, y=yl, # x_units='screen', y_units='screen', # text=str(int(l)), render_mode='css', # border_line_color='black', border_line_alpha=1.0, # background_fill_color='white', background_fill_alpha=1.0) # p.add_layout(citation) # show(p) # c_cols = list(set(customers.columns) - 
set(azdias.columns)) # customer_groups = customers[c_cols].copy() # customer_groups['PRODUCT_GROUP_COSMETIC'] = (customer_groups['PRODUCT_GROUP'].str.find('COSMETIC') >= 0) # customer_groups['PRODUCT_GROUP_FOOD'] = (customer_groups['PRODUCT_GROUP'].str.find('FOOD') >= 0) # customer_groups.drop(columns='PRODUCT_GROUP', inplace=True) # label_groups = pd.merge(labels, customer_groups, left_index=True, right_index=True, how='left') # label_groups['ONLINE_PURCHASE'].fillna(-1, inplace=True) # label_groups['CUSTOMER_GROUP'].fillna('NONE', inplace=True) # label_groups['PRODUCT_GROUP_COSMETIC'].fillna(False, inplace=True) # label_groups['PRODUCT_GROUP_FOOD'].fillna(False, inplace=True) # label_groups.head() # layout = column() # for k in ['ONLINE_PURCHASE', 'CUSTOMER_GROUP', 'PRODUCT_GROUP_COSMETIC', 'PRODUCT_GROUP_FOOD']: # x = embedding[:,0] # y = embedding[:,1] # if k == 'ONLINE_PURCHASE': # colors = label_groups[k].replace({-1: Category20[20][1], 0:Category20[20][4], 1: Category20[20][6]}) # elif k == 'CUSTOMER_GROUP': # colors = label_groups[k].replace({'NONE': Category20[20][1], 'SINGLE_BUYER':Category20[20][4], 'MULTI_BUYER': Category20[20][6]}) # else: # colors = label_groups[k].replace({False: Category20[20][1], True: Category20[20][6]}) # p = figure(title=k) # p.scatter(x, y, radius=0.1, # fill_color=colors, fill_alpha=0.25, # line_color=None) # layout.children.append(p) # show(layout) ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. ###Code # find features that the distribution of customers and general population differ the most on (ignoring missing values) most_different = similarity_scores.sort_values(by=['ks'], ascending=False).index def output_col(c, attributes=attributes): """plot helper""" p = ks_bar_plot(c) show(p) # display(attributes[attributes['Attribute'] == c]) ###Output _____no_output_____ ###Markdown Customers vs General Population:- Customers are typically older, growing up in the 40's - 60's, and older than 60 years of age. - PRAEGENDE_JUGENDJAHRE - ALTERSKATEGORIE_GROB - LP_LEBENSPHASE_FEIN - ALTER_HH - LP_STATUS_FEIN- Customers are more likely to be male than general population (2:1 vs 1:1) - ANREDE_KZ- Customers are wealthy and have a high income, with a greater chance of being a customer the more wealthy they are. - HH_EINKOMMEN_SCORE - LP_STATUS_FEIN- Customers have a higher interest in investing and saving, and lower in finance minimalism and preparedness - FINANZ_SPARER - FINANZ_ANLEGER - FINANZ_VORSORGER - FINANZ_MINIMALIST- Customers typically own their home - LP_LEBENSPHASE_FEIN - LP_STATUS_FEIN - FINANZ_HAUSBAUER- Customers live in freestanding homes but more densely populated areas - KBA05_ANTG1 - ANZ_HAUSHALTE_AKTIV - KBA05_GBZ- Customers are more family orientated, and have multiple generations living in the same home - LP_FAMILIE_FEIN - ANZ_PERSONEN- Customers are immobile w.r.t. 
housing, staying at the same location for 10+ years rather than moving home - WOHNDAUER_2008 - MOBI_REGIO- Customers own more cars, and newer cars than the general population - KBA05_AUTOQUOT - KBA05_VORB2- Customers are critical, realistic, traditional and religious - SEMIO_PFLICHT - SEMIO_REL - SEMIO_VERT - SEMIO_KRIT - SEMIO_TRADV - SEMIO_LUST - SEMIO_RAT Feature analysis- D19_KONSUMTYP: consumption type - Customers are more likely to fall within the universal, versatile and gourmet groups - Customers are less likely to fall within the family, informed, modern and inactive groups- PRAEGENDE_JUGENDJAHRE: dominating movement in the person's youth (avantgarde or mainstream) - Customers are more likely to fall in the 60ies group- FINANZ_VORSORGER: financial typology: be prepared - Customers rank low and very low on financial preparedness - Customers are less likely to rank average, high or very high on financial preparedness - FINANZ_SPARER: financial typology: money saver - Customers rank very high on saving money - Customers are less likely to rank high, average, low, or very low on saving money- FINANZ_ANLEGER: financial typology: investor - Customers rank very high on investing money - Customers are less likely to rank high, average, low, or very low on investing money- FINANZ_MINIMALIST: financial typology: low financial interest - Customers rank very low on financial interest - Customers are less likely to rank low, average, high or very high on financial interest- LP_STATUS_FEIN: social status fine - Customers are typically houseowners, title-holder-households, and top earners - Customers are not minimalistic high-income earners, orientation-seeking low-income earners, or new houseowners- SEMIO_RAT: affinity indicating in what way the person is of a rational mind - Customers are more likely to be rational individuals- ALTERSKATEGORIE_GROB: age classification through prename analysis - Customer age is typically older than 60 years (this aligns with PRAEGENDE_JUGENDJAHRE) - There is also a large number of customers in the > 45 age group, but this is not larger than the general population- SEMIO_PFLICHT: affinity indicating in what way the person is dutiful traditional minded - Customers are more likely to show affinity with traditional values- ZABEOTYP: typification of energy consumers - Customers are more likely to identify with green, smart or fair supplied energy types - Customers are less likely to identify with price driven, orientation seeking, or indifferent energy types- HH_EINKOMMEN_SCORE: estimated household net income - Customers are of above average to high household income- ANZ_HAUSHALTE_AKTIV: number of households in the building - Customers typically have fewer households in the building than non-customers- SEMIO_LUST: affinity indicating in what way the person is sensual minded - Customers have low affinity with sensuality- LP_LEBENSPHASE_FEIN: lifestage fine - Customers are top earners, homeowners, and of older age- KBA05_BAUMAX: most common building-type within the cell - Customers are located in less densely populated areas with only 1-2 homes in a microcell- FINANZ_UNAUFFAELLIGER: financial typology: unremarkable - HMMM?- SEMIO_TRADV: affinity indicating in what way the person is traditional minded - More towards high affinity - SEMIO_KRIT: affinity indicating in what way the person is critical minded - Higher affinity with being critically minded- ALTER_HH: main age within the household - Customers' birth years are more likely to be = 1955- KBA05_ANTG1: number of 1-2 family 
houses in the cell - Homes are typically 1-2 family homes- ANZ_PERSONEN: number of adult persons in the household - More likely to have a larger number of adults in the household - LP_FAMILIE_FEIN: familytyp fine - Less likely to be single - More likely to be a coupe with multi genrations living in the same household- MOBI_REGIO: moving patterns - Low to very low mobility compared to the general population - WOHNDAUER_2008: length of residence - Living in the same location for more than 10 years- ANREDE_KZ - More likely to be male compared to general population- SEMIO_VERT: affinity indicating in what way the person is dreamily - low afinity to dreaming, i.e. more focused on reality- SEMIO_REL: affinity indicating in what way the person is religious - More affinity with religion- KBA05_AUTOQUOT: share of cars per household - Customers are more likely to have a high number of cars- KBA05_GBZ: number of buildings in the microcell - more likely to have a large number of buildings in the microcell - KBA05_VORB2: share of cars with more than two preowner - More likely to own newer cars than pre-owned cars- FINANZ_HAUSBAUER: financial typology: main focus is the own house - High affinity with owning a home ###Code # Plot the 50 features that are the most different between the general population and the customers for k in range(50): output_col(most_different[k]) print('='*117*2) ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code # Find the variables taking up the most memory d = dict() keys = list(locals().keys()) for k in keys: d[k] = getsizeof(locals()[k])/1000/1000 # convert to MB pd.DataFrame(d.values(), columns=['mem size'], index=d.keys()).sort_values(by='mem size', ascending=False).head(10) # clear memory del azdias del customers del a_dups del c_dups gc.collect() def apply_pre_processing(df, low_to_no_info_columns, correlated_cols_to_drop, ks_cols_to_drop, col_to_proc, missing_labels, missing_fill_value, drop_duplicates=True, replacements=None, ANZ_HAUSHALTE_AKTIV_quantiles=ANZ_HAUSHALTE_AKTIV_quantiles): """ Systematically apply the pre-processing steps uncovered through the analysis process. The process 1. Removes meta data and sets the index 2. Drops columns of low value 3. Drops duplicated rows if asked to do so 4. Converts columns to numeric equivalents 5. Unifies the missing labels 6. Fill NaNs with a unique label (separate from missing labels) 7. Converts all columns to int 8. 
Cleans columns that are prone to ouliers Inputs df - DataFrame with the data to process low_to_no_info_columns - columns identified as low or no info in the analysis process correlated_cols_to_drop - columns identified as highly correlated in the analysis process ks_cols_to_drop - columns identified as similarly distributed in the analysis process col_to_proc - columns and processes mapping to convert object columns to numeric missing_labels - labels for each column which indicates the missing values missing_fill_value - labels for each column to use for NaN values drop_duplicates=True - flag that drops dupliacte rows from the dataset replacements=None - replacements mapping (generated in this function to be re-used for subsequent datasets) ANZ_HAUSHALTE_AKTIV_quantiles - bins to use for this column as determined by the analysis process Output The dataframe is processed in place, so its not retuned replacements - mapping as calculated in the application of this function """ setup_dataframe(df) columns_to_drop = [ *low_to_no_info_columns, # columns we dont have descriptions for in the accompanying data descriptions *correlated_cols_to_drop, # columns that are highly correlated *ks_cols_to_drop, # columns with the same distribution in customers and general population ] df.drop(columns=columns_to_drop, inplace=True) # Check for duplicated rows if drop_duplicates: dups = df.duplicated() print(f"Found and dropping {dups.sum()} duplicate rows") df.drop(index=(dups[dups==True].index), inplace=True) # process non numeric columns so they are numeric if 'CAMEO_DEUG_2015' in col_to_proc.keys(): col_to_proc.pop('CAMEO_DEUG_2015') apply_cleaning_process(df, col_to_proc) # convert categoricals to numerics if replacements is None: replacements = dict() cols_to_replace = ['CAMEO_DEU_2015', 'OST_WEST_KZ'] for c in cols_to_replace: replacements[c] = categorical_to_numeric(df, c) else: for c, r in replacements.items(): categorical_to_numeric(df, c, r) # some columns have more than 1 missing label, unify these in the data clean_missing(df, missing_labels) # replace all nans with a numeric value not found in the data df.fillna(missing_fill_value, inplace=True) # analysis showed all columns thus far can be converted to int df.astype(np.int64, copy=False) # apply treatment to some features to reduce outliers or reduce the number of categories df.loc[df['ANZ_PERSONEN'] >= 8,'ANZ_PERSONEN'] = 8 df['KBA13_ANZAHL_PKW'] = (np.ceil((df['KBA13_ANZAHL_PKW']/100))*100) bin_the_tails(df, 'ANZ_HAUSHALTE_AKTIV', ANZ_HAUSHALTE_AKTIV_quantiles) return replacements # re-load the original datasets so we can be sure we are processing them as intended train_azdias = load_data('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', 'azdias') train_customers = load_data('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', 'customers') # Drop descriptive columns train_customers.drop(columns=['CUSTOMER_GROUP', 'PRODUCT_GROUP', 'ONLINE_PURCHASE'], inplace=True) # apply standardised pre-processing replacements = apply_pre_processing(train_azdias, low_to_no_info_columns, correlated_cols_to_drop, ks_cols_to_drop, col_to_proc, missing_labels, missing_fill_value, drop_duplicates=True, replacements=None) # apply standardised pre-processing _ = apply_pre_processing(train_customers, low_to_no_info_columns, correlated_cols_to_drop, ks_cols_to_drop, col_to_proc, missing_labels, missing_fill_value, drop_duplicates=True, replacements=replacements) # Construct a training dataset from the two larger datasets 
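# (sanity check, not part of the original pipeline: both frames went through the same
#  apply_pre_processing call, so their column sets should match exactly; pd.concat
#  would otherwise silently pad any mismatch with NaNs)
assert set(train_azdias.columns) == set(train_customers.columns), "column mismatch after pre-processing"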
train_azdias['RESPONSE'] = 0 train_customers['RESPONSE'] = 1 data_train = pd.concat([train_azdias, train_customers], axis=0) # Load verification dataset mailout_veri = load_data('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', 'mailout_train') # apply standardised pre-processing _ = apply_pre_processing(mailout_veri, low_to_no_info_columns, correlated_cols_to_drop, ks_cols_to_drop, col_to_proc, missing_labels, missing_fill_value, drop_duplicates=False, replacements=replacements) # Construct datasets X = data_train.drop(columns=['RESPONSE']) y = data_train['RESPONSE'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) X_valid = mailout_veri.drop(columns=['RESPONSE']) y_valid = mailout_veri['RESPONSE'] # clear memory del data_train del train_azdias del train_customers del mailout_veri gc.collect() # what is the balance of the data classes? y.value_counts()/len(y) %%time # Train the classifier clf = RandomForestClassifier(random_state=42, n_jobs=-1) clf.fit(X_train, y_train) # Predict on the test set y_pred_test = clf.predict(X_test) y_score_test = clf.predict_proba(X_test)[:,1] # Predict on the validation set y_pred_valid = clf.predict(X_valid) y_score_valid = clf.predict_proba(X_valid)[:,1] # Calculate ROC curves and AUC fpr_test, tpr_test, thresholds_test = roc_curve(y_test, y_score_test) roc_auc_test = auc(fpr_test, tpr_test) fpr_valid, tpr_valid, thresholds_valid = roc_curve(y_valid, y_score_valid) roc_auc_valid = auc(fpr_valid, tpr_valid) # Plot the results p = figure(title="ROC") p.line(fpr_test, tpr_test, color='Red', legend_label=f"Test ROC curve (AUC = {roc_auc_test:0.3f})") p.line(fpr_valid, tpr_valid, color='Blue', legend_label=f"Validation ROC curve (AUC = {roc_auc_valid:0.3f})") p.line([0,1], [0,1], color='Gray') p.yaxis.axis_label = "True Positive Rate" p.xaxis.axis_label = "False Positive Rate" p.legend.location = "bottom_right" show(p) ###Output _____no_output_____ ###Markdown Gridsearch ###Code # # Get a ballpark for tree depth so we can set meaningful values for our gridsearch # np.mean(np.array([estimator.tree_.max_depth for estimator in clf.estimators_])) # around 42 # # paramaters to search over # param_grid = { # 'n_estimators': [100, 500], # 'max_depth': [None, 10, 20, 30], # 'min_samples_split': [2, 5, 10, 15], # 'min_samples_leaf': [1, 2, 5, 10], # 'max_features': ['sqrt', 'log2'], # } # # construct pipeline for search # clf_hyp = RandomForestClassifier(random_state=42, n_jobs=-1) # clf_rand = RandomizedSearchCV(estimator=clf_hyp, param_distributions=param_grid, n_iter=(60/2), # verbose=5, random_state=42, scoring='roc_auc') # # find the best hyper-parameters # %%time # clf_rand.fit(X, y) # # And the best parameters are.... 
# clf_rand.best_params_ # # Evaluate the best performing model # y_pred_test_rnd = clf_rand.predict(X_test) # y_score_test_rnd = clf_rand.predict_proba(X_test)[:,1] # y_pred_valid_rnd = clf_rand.predict(X_valid) # y_score_valid_rnd = clf_rand.predict_proba(X_valid)[:,1] # # fpr_test_rnd, tpr_test_rnd, thresholds_test_rnd = roc_curve(y_test, y_score_test_rnd) # roc_auc_test_rnd = auc(fpr_test_rnd, tpr_test_rnd) # fpr_valid_rnd, tpr_valid_rnd, thresholds_valid_rnd = roc_curve(y_valid, y_score_valid_rnd) # roc_auc_valid_rnd = auc(fpr_valid_rnd, tpr_valid_rnd) # Plot the results # p = figure(title="ROC improvement") # p.line(fpr_test, tpr_test, color='#ff9e9e', legend_label=f"Test ROC curve (AUC = {roc_auc_test:0.3f})") # p.line(fpr_valid, tpr_valid, color='#a3a3ff', legend_label=f"Validation ROC curve (AUC = {roc_auc_valid:0.3f})") # p.line(fpr_test_rnd, tpr_test_rnd, color='Red', legend_label=f"Optimised Test ROC curve (AUC = {roc_auc_test_rnd:0.3f})") # p.line(fpr_valid_rnd, tpr_valid_rnd, color='Blue', legend_label=f"Optimised Validation ROC curve (AUC = {roc_auc_valid_rnd:0.3f})") # p.line([0,1], [0,1], color='Gray') # p.yaxis.axis_label = "True Positive Rate" # p.xaxis.axis_label = "False Positive Rate" # p.legend.location = "bottom_right" # show(p) # results_df = pd.DataFrame(clf_rand.cv_results_)[['params', 'mean_test_score', 'std_test_score']].sort_values(by='mean_test_score', ascending=False) # for k in clf_rand.best_params_.keys(): # results_df[k] = results_df['params'].apply(lambda x: x[k]) # results_df.drop(columns=['params'], inplace=True) # results_df ###Output _____no_output_____ ###Markdown Varying n_estimators ###Code # performance_data = dict() # # Train a classifier for multiple values of n_estimators # for k in tqdm(np.geomspace(10,500,18, dtype=int)): # # Train a classifier for k estimators # clf = RandomForestClassifier(random_state=42, n_jobs=-1, n_estimators=k) # clf.fit(X_train, y_train) # # Evaluate the model # y_pred_test = clf.predict(X_test) # y_score_test = clf.predict_proba(X_test)[:,1] # y_pred_valid = clf.predict(X_valid) # y_score_valid = clf.predict_proba(X_valid)[:,1] # fpr_test, tpr_test, thresholds_test = roc_curve(y_test, y_score_test) # roc_auc_test = auc(fpr_test, tpr_test) # fpr_valid, tpr_valid, thresholds_valid = roc_curve(y_valid, y_score_valid) # roc_auc_valid = auc(fpr_valid, tpr_valid) # # store the results # performance_data[k] = { # 'y_score_test': y_score_test, # 'y_score_valid': y_score_valid, # 'fpr_test': fpr_test, # 'tpr_test': tpr_test, # 'thresholds_test': thresholds_test, # 'roc_auc_test': roc_auc_test, # 'fpr_valid': fpr_valid, # 'tpr_valid': tpr_valid, # 'thresholds_valid': thresholds_valid, # 'roc_auc_valid': roc_auc_valid, # } # print(f"{k}, {roc_auc_test:0.3f}, {roc_auc_valid:0.3f}") # # Plot the data # p = figure(title="ROC for varying n_estimators") # for k, v in performance_data.items(): # fpr_valid, tpr_valid = v['fpr_valid'], v['tpr_valid'] # if np.abs(k - 100) < 2: # color = 'Blue' # alpha = 1 # legend = 'n_estimators = 100' # elif np.abs(k - 126) < 5: # color = 'Red' # alpha = 1 # legend = 'n_estimators = 126' # else: # color = 'Black' # alpha = 0.15 # legend = 'n_estimators variations' # # p.line(fpr_test, tpr_test, color='Red', legend_label=f"Test ROC curve (AUC = {roc_auc_test:0.3f})") # p.line(fpr_valid, tpr_valid, color=color, line_alpha=alpha, legend_label=legend) # p.line([0,1], [0,1], color='Gray') # p.yaxis.axis_label = "True Positive Rate" # p.xaxis.axis_label = "False Positive Rate" # p.legend.location = 
"bottom_right" # show(p) # # Plot the data # p = figure(title="AUC scores by n_estimators") # n_est, auc_test, auc_valid = [], [], [] # for k, v in performance_data.items(): # n_est.append(k) # auc_test.append(v['roc_auc_test']) # auc_valid.append(v['roc_auc_valid']) # p.line(n_est, auc_test, color='Blue', legend_label=f"Test AUC scores") # p.line(n_est, auc_valid, color='Red', legend_label=f"Validation AUC scores") # # p.line([0,1], [0,1], color='Gray') # p.yaxis.axis_label = "AUC" # p.xaxis.axis_label = "n_estimators" # p.legend.location = "bottom_right" # show(p) ###Output _____no_output_____ ###Markdown Stacked classifier ###Code def get_scores(clf, X_data, y_data): """ Score a classifier on a dataset and return evaluation metrics. Input clf - A classifier to evaluate X_data - The feature data to evaluate on y_data - The labels to evaluate against Output fpr - False positive rate values tpr - True positive rate values roc_auc - AUC score """ y_score_data = clf.predict_proba(X_data)[:,1] fpr, tpr, thresholds = roc_curve(y_data, y_score_data) roc_auc = auc(fpr, tpr) return fpr, tpr, roc_auc # # Stack a few classifiers # clfs = StackingClassifier(estimators=[ # ('n15', RandomForestClassifier(n_estimators=15, random_state=42, n_jobs=1)), # ('n31', RandomForestClassifier(n_estimators=31, random_state=42, n_jobs=2)), # ('n126', RandomForestClassifier(n_estimators=126, random_state=42, n_jobs=5)), # ] #) # # Strain the stacked ensable # clfs.fit(X_train, y_train) # Evaluate # fpr_valids, tpr_valids, roc_auc_valids = get_scores(clfs, X_valid, y_valid) # roc_auc_valids # # Evaluate the elements of the ensamble individually to see if the stack exploits the strenghts of each # fpr_valid, tpr_valid = [], [] # for c in clfs.estimators_: # fpr_v, tpr_v, roc_auc_valids = get_scores(c, X_valid, y_valid) # fpr_valid.append(fpr_v) # tpr_valid.append(tpr_v) # # Plot the data # p = figure(title="ROC Stacked Random Forests") # for fpr, tpr in zip(fpr_valid, tpr_valid): # p.line(fpr, tpr, color='Blue', alpha=0.25, legend_label=f"Individual Random Forests") # p.line(fpr_valids, tpr_valids, color='Red', legend_label=f"Stacked (AUC = {roc_auc_valids:0.3f})") # p.line([0,1], [0,1], color='Gray') # p.yaxis.axis_label = "True Positive Rate" # p.xaxis.axis_label = "False Positive Rate" # p.legend.location = "bottom_right" # show(p) ###Output _____no_output_____ ###Markdown Balanced vs unbalanced vs mathing validation set balance ###Code # # Train classifier assuming equally weighted classes # clf_plain = RandomForestClassifier(n_estimators = 126, random_state=42, n_jobs=-1) # clf_plain.fit(X_train, y_train) # print('done1') # # Train classifier calculating weights from the data # clf_bal = RandomForestClassifier(n_estimators = 126, random_state=42, n_jobs=-1, class_weight='balanced') # clf_bal.fit(X_train, y_train) # print('done2') # # Train classifier calculating weights from the data for each subtree # clf_balsub = RandomForestClassifier(n_estimators = 126, random_state=42, n_jobs=-1, class_weight='balanced_subsample') # clf_balsub.fit(X_train, y_train) # print('done3') # # Train classifier calculating weights from the validation data # clf_val = RandomForestClassifier(n_estimators = 126, random_state=42, n_jobs=-1, class_weight=(y_valid.value_counts()/len(y_valid)).to_dict()) # clf_val.fit(X_train, y_train) # print('done4') # # score the classifiers # fpr_valid1, tpr_valid1, roc_auc_valid1 = get_scores(clf_plain, X_valid, y_valid) # fpr_valid2, tpr_valid2, roc_auc_valid2 = get_scores(clf_bal, X_valid, y_valid) 
# fpr_valid3, tpr_valid3, roc_auc_valid3 = get_scores(clf_balsub, X_valid, y_valid) # fpr_valid4, tpr_valid4, roc_auc_valid4 = get_scores(clf_val, X_valid, y_valid) # # Plot the data # p = figure(title="ROC Balancing") # p.line(fpr_valid1, tpr_valid1, color='Red', legend_label=f"none (AUC = {roc_auc_valid1:0.3f})") # p.line(fpr_valid2, tpr_valid2, color='Blue', legend_label=f"balanced (AUC = {roc_auc_valid2:0.3f})") # p.line(fpr_valid3, tpr_valid3, color='Green', legend_label=f"balanced subsample (AUC = {roc_auc_valid3:0.3f})") # p.line(fpr_valid4, tpr_valid4, color='Purple', legend_label=f"balanced validation (AUC = {roc_auc_valid4:0.3f})") # p.line([0,1], [0,1], color='Gray') # p.yaxis.axis_label = "True Positive Rate" # p.xaxis.axis_label = "False Positive Rate" # p.legend.location = "bottom_right" # show(p) ###Output _____no_output_____ ###Markdown Reducing number of features ###Code # # Train a classifier, measure performance, find the least important features, remove them, retrain # res_dict = dict() # X_lim_t = X_train.copy() # X_lim_v = X_valid.copy() # # loop over the number of features we want to remain (geometrically spaced) # for rem_cols in tqdm(np.flip(np.geomspace(8,len(X_train.columns),16,dtype = int, endpoint=False))): # # train a classifier # clf = RandomForestClassifier(n_estimators = 126, random_state=42, n_jobs=-1) # clf.fit(X_lim_t, y_train) # # evaluate performance on validation set # fpr_valid, tpr_valid, roc_auc_valid = get_scores(clf, X_lim_v, y_valid) # n = len(X_lim_t.columns) # # store results # res_dict[n] = { # "score": roc_auc_valid, # "remaining columns": X_lim_t.columns # } # print(n, roc_auc_valid) # # calculate number of columns to remove # step_size = n - rem_cols # # remove the least important columns # cols_to_drop = pd.DataFrame(clf.feature_importances_, index=X_lim_t.columns).sort_values(by=0).head(step_size).index # X_lim_t.drop(columns=cols_to_drop, inplace=True) # X_lim_v.drop(columns=cols_to_drop, inplace=True) # # most_important_columns = pd.DataFrame(res_dict).T.sort_index().loc[31]['remaining columns'].to_list() most_important_columns = [ 'AGER_TYP', 'ALTER_HH', 'CAMEO_DEU_2015', 'CAMEO_INTL_2015', 'CJT_GESAMTTYP', 'D19_GESAMT_DATUM', 'D19_GESAMT_OFFLINE_DATUM', 'D19_KONSUMTYP', 'D19_LOTTO', 'D19_SONSTIGE', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'FINANZ_ANLEGER', 'FINANZ_MINIMALIST', 'FINANZ_SPARER', 'FINANZ_VORSORGER', 'GEBURTSJAHR', 'GFK_URLAUBERTYP', 'HH_EINKOMMEN_SCORE', 'INNENSTADT', 'KBA05_HERST1', 'KBA05_ZUL4', 'KBA13_ANZAHL_PKW', 'KBA13_SEG_SPORTWAGEN', 'KONSUMNAEHE', 'LP_LEBENSPHASE_FEIN', 'LP_STATUS_FEIN', 'ORTSGR_KLS9', 'PRAEGENDE_JUGENDJAHRE', 'REGIOTYP', 'SEMIO_PFLICHT' ] # # Plot the data # p = figure(title="AUC by number of most important features") # p.line(pd.DataFrame(res_dict).T.index, pd.DataFrame(res_dict).T['score'].values, color='Blue', legend_label=f"AUC") # p.yaxis.axis_label = "AUC" # p.xaxis.axis_label = "number of most important features" # p.legend.location = "bottom_right" # show(p) ###Output _____no_output_____ ###Markdown Predict missing values ###Code # for c in most_important_columns: # m = missing_labels_all.loc[c] # sum(X[c].isin(m)) # print(f"{c}, {m}, {sum(X[c].isin(m))/len(X):0.3f}, {sum(X_valid[c].isin(m))/len(X_valid):0.3f}") ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. 
If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code %%time # Train the best classifier from above on all the data clf = RandomForestClassifier(random_state=42, n_jobs=-1, n_estimators=126) clf.fit(X, y) # Evaluate on the validation set y_pred_valid = clf.predict(X_valid) y_score_valid = clf.predict_proba(X_valid)[:,1] fpr_valid, tpr_valid, thresholds_valid = roc_curve(y_valid, y_score_valid) roc_auc_valid = auc(fpr_valid, tpr_valid) print(roc_auc_valid) # did 20% extra data provide any real gains? Yes about 3.5%! # Load the kaggle dataset mailout_test = load_data('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', 'mailout_test') # apply the standard pre-processing pipeline _ = apply_pre_processing(mailout_test, low_to_no_info_columns, correlated_cols_to_drop, ks_cols_to_drop, col_to_proc, missing_labels, missing_fill_value, drop_duplicates=False, replacements=replacements) # score the kaggle dataset y_score_kaggle = clf.predict_proba(mailout_test)[:,1] # convert dataframe to csv file for submission kaggle = pd.DataFrame(y_score_kaggle, columns=['RESPONSE'], index=mailout_test.index) kaggle.to_csv('kaggle.csv') # check scores are correct kaggle.head() ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. 
In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from sklearn.cluster import KMeans from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.metrics import plot_roc_curve, roc_curve, roc_auc_score from lightgbm import LGBMClassifier # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. 
Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('../Capstone/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../Capstone/Udacity_CUSTOMERS_052018.csv', sep=';') # Inspecting the population data azdias.head() azdias.shape # Inspecting the customers data customers.head() customers.shape ###Output _____no_output_____ ###Markdown 0.1 Clean the data Missing Values ###Code # Both tables have a lot of missing values - we need to treat them appriopriately. # Let's first check the main attributes that are missing. The below function will help us get that insight. # Note that for all the attributes value of -1 means that that value was actually uknown - hence we will swap -1 for NAs later on def find_missing_vals(df): percent_missing = df.isnull().sum() * 100 / len(df) missing_value_df = pd.DataFrame({'column_name': df.columns, 'percent_missing': percent_missing}) missing_value_df = missing_value_df.sort_values('percent_missing', ascending = False) return missing_value_df azdias_miss = find_missing_vals(azdias) azdias_miss.head(30).plot(kind='bar', figsize=(20,8), fontsize=13) plt.title("Distribution of missing values in azdias dataset",fontsize=13,fontweight="bold") plt.xlabel("Features", fontsize=13) plt.ylabel("Precent of missing values", fontsize=13) cust_miss = find_missing_vals(customers) cust_miss.head(30).plot(kind='bar', figsize=(20,8), fontsize=13) plt.title("Distribution of missing values in customers dataset", fontsize=13, fontweight="bold") plt.xlabel("Features", fontsize=13) plt.ylabel("Precent of missing values", fontsize=13) # Find the columns that have > 50% of missing values azdias_drop = azdias_miss[azdias_miss['percent_missing'] > 50] azdias_drop cust_drop = cust_miss[cust_miss['percent_missing']>50] cust_drop # We see that the set of columns is very similar apart from the EXTSEL992 column which is not in the provided schema. # Hence let's remove those columns. azdias_clean = azdias.drop(azdias_drop[azdias_drop['percent_missing']>50]['column_name'], axis = 1) customers_clean = customers.drop(azdias_drop[azdias_drop['percent_missing']>50]['column_name'], axis = 1) # Now let's swap -1s for NA and repeat the process. 
azdias_clean = azdias_clean.replace(-1, np.nan) customers_clean = customers_clean.replace(-1, np.nan) azdias_miss = find_missing_vals(azdias_clean) azdias_miss.head(30).plot(kind='bar', figsize=(20,8), fontsize=13) plt.title("Distribution of missing values in azdias dataset",fontsize=13,fontweight="bold") plt.xlabel("Features", fontsize=13) plt.ylabel("Precent of missing values", fontsize=13) customers_miss = find_missing_vals(customers_clean) customers_miss.head(30).plot(kind='bar', figsize=(20,8), fontsize=13) plt.title("Distribution of missing values in customers dataset",fontsize=13,fontweight="bold") plt.xlabel("Features", fontsize=13) plt.ylabel("Precent of missing values", fontsize=13) # Find the columns that have > 25% of missing values azdias_drop = azdias_miss[azdias_miss['percent_missing'] > 25] azdias_drop # Let's remove those columns. azdias_clean = azdias_clean.drop(azdias_drop[azdias_drop['percent_missing']>25]['column_name'], axis = 1) customers_clean = customers_clean.drop(azdias_drop[azdias_drop['percent_missing']>25]['column_name'], axis = 1) azdias_clean.head() customers_clean.head() ### Make sure we have numerical attributes only azdias_clean.dtypes.value_counts() azdias_clean.dtypes[azdias_clean.dtypes == 'object'] azdias_clean[['CAMEO_DEU_2015', 'CAMEO_DEUG_2015', 'CAMEO_INTL_2015', 'EINGEFUEGT_AM', 'OST_WEST_KZ']] azdias_clean['CAMEO_DEUG_2015'].value_counts() azdias_clean['CAMEO_INTL_2015'].value_counts() azdias_clean['OST_WEST_KZ'].value_counts() ## Clean the object columns for azdias ## We need to: ## 1. Replace strings like X or XX with missing values ## 2. Modify categorical variables to become dummy variables ## 3. Drop other column types azdias_clean = azdias_clean.replace('X', np.nan).replace('XX', np.nan) azdias_clean['CAMEO_DEUG_2015'] = pd.to_numeric(azdias_clean["CAMEO_DEUG_2015"]) azdias_clean['CAMEO_INTL_2015'] = pd.to_numeric(azdias_clean["CAMEO_INTL_2015"]) azdias_clean['OST_WEST_KZ'] = [1 if x == 'W' else 0 for x in azdias_clean['OST_WEST_KZ']] azdias_clean = azdias_clean.drop(['CAMEO_DEU_2015', 'EINGEFUEGT_AM'], axis=1) azdias_clean.dtypes.value_counts() customers_clean.dtypes.value_counts() customers_clean.dtypes[customers_clean.dtypes == 'object'] customers_clean[['PRODUCT_GROUP', 'CUSTOMER_GROUP']] customers_clean['PRODUCT_GROUP'].value_counts() customers_clean['CUSTOMER_GROUP'].value_counts() ## Clean the object columns for customers ## We need to: ## 1. Replace strings like X or XX with missing values ## 2. Modify categorical variables to become dummy variables ## 3. 
Drop other column types customers_clean = customers_clean.replace('X', np.nan).replace('XX', np.nan) customers_clean['CAMEO_DEUG_2015'] = pd.to_numeric(customers_clean["CAMEO_DEUG_2015"]) customers_clean['CAMEO_INTL_2015'] = pd.to_numeric(customers_clean["CAMEO_INTL_2015"]) customers_clean['OST_WEST_KZ'] = [1 if x == 'W' else 0 for x in customers_clean['OST_WEST_KZ']] customers_clean['CUSTOMER_GROUP_MULTI'] = [1 if x == 'MULTI_BUYER' else 0 for x in customers_clean['CUSTOMER_GROUP']] customers_clean['PRODUCT_GROUP_FOOD'] = [1 if x == 'FOOD' else 0 for x in customers_clean['PRODUCT_GROUP']] customers_clean['PRODUCT_GROUP_COSMETIC_AND_FOOD'] = [1 if x == 'COSMETIC_AND_FOOD' else 0 for x in customers_clean['PRODUCT_GROUP']] customers_clean = customers_clean.drop(['CAMEO_DEU_2015', 'EINGEFUEGT_AM', 'PRODUCT_GROUP', 'CUSTOMER_GROUP'], axis=1) customers_clean.dtypes.value_counts() ###Output _____no_output_____ ###Markdown 0.2 Dealing with outliers ###Code azdias_clean.describe() customers_clean.describe() ###Output _____no_output_____ ###Markdown We see that there is a lot of columns with some extreme values deviating from both the mean and the mean +- 2 standard deviations. For example, for azdias, the mean of ANZ_HAUSHALTE_AKTIV column is 8.29 with standard deviation of 15.62 but we have a value as extreme as 595. We should make sure we treat such outliers. We will use the Tukey rule to remove outliers. ###Code def tukey_rule(data_frame, column_name): data = data_frame[column_name] Q1 = data.quantile(0.25) Q3 = data.quantile(0.75) IQR = Q3 - Q1 max_value = Q3 + 1.5 * IQR min_value = Q1 - 1.5 * IQR return data_frame[(data_frame[column_name] < max_value) & (data_frame[column_name] > min_value)] ###Output _____no_output_____ ###Markdown If we just run tukey rule on this data set we would remove... all the rows! Hence some standard deviations must be very large - let's find those columns. ###Code azdias_clean.iloc[:,(-azdias_clean.std()).argsort()] azdias_clean.std().sort_values().tail(20) ###Output _____no_output_____ ###Markdown Ok, so we probably shouldn't run the Tukey rule on the ordering number or the year of birth...LNR is the ordering number - that should be dropped.KBA13_ANZAHL_PKW is the number of cars within the zone - to make sure this doesn't corrupt our data let's drop it too.ANZ_HAUSHALTE_AKTIV should be a number from 1-10 - remove values from outside this range before applying Tukey rule.LP_LEBENSPHASE_FEIN should be a number from 1-40 - remove values from outside this range before applying Tukey rule.ALTER_HH should be a number from 1-21 - remove values from outside this range before applying Tukey rule.Also note that for categorical variables we shouldn't be using the Tukey rule at all!!! 
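To see how much the choice of quantiles changes the cut-offs, it helps to compare the bounds produced by the standard quartiles with a widened version on a single column. The snippet below is only an illustrative sketch (it introduces a helper `tukey_bounds` and uses `KBA13_ANZAHL_PKW` purely as an example of a wide-ranging numeric column); the actual treatment applied in the following cells uses the modified rule on the full set of non-categorical columns.

###Code
def tukey_bounds(series, q_low=0.25, q_high=0.75, k=1.5):
    """Return the (min, max) cut-offs of a Tukey-style rule for the given quantiles."""
    q1, q3 = series.quantile(q_low), series.quantile(q_high)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

col = azdias_clean['KBA13_ANZAHL_PKW']
print('standard quartiles (0.25/0.75):', tukey_bounds(col))
print('widened quantiles (0.001/0.999):', tukey_bounds(col, q_low=0.001, q_high=0.999))
###Output
_____no_output_____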
###Code cat_vars = pd.read_csv("cat_vars.csv") azdias_outliers_removed = azdias_clean.copy() customers_outliers_removed = customers_clean.copy() azdias_outliers_removed = azdias_outliers_removed.drop(['LNR', 'KBA13_ANZAHL_PKW'], axis=1) #dropping LNR as it is ordering number customers_outliers_removed = customers_outliers_removed.drop(['LNR', 'KBA13_ANZAHL_PKW'], axis=1) #dropping LNR as it is ordering number azdias_outliers_removed['ANZ_HAUSHALTE_AKTIV'] = [val if val <= 10 and val >= 1 else np.nan for val in azdias_outliers_removed['ANZ_HAUSHALTE_AKTIV']] azdias_outliers_removed['LP_LEBENSPHASE_FEIN'] = [val if val <= 40 and val >= 1 else np.nan for val in azdias_outliers_removed['LP_LEBENSPHASE_FEIN']] azdias_outliers_removed['ALTER_HH'] = [val if val <= 21 and val >= 1 else np.nan for val in azdias_outliers_removed['ALTER_HH']] azdias_outliers_removed = azdias_outliers_removed.drop(['LP_LEBENSPHASE_FEIN', 'GEMEINDETYP', 'VERDICHTUNGSRAUM', 'GEBURTSJAHR', 'EINGEZOGENAM_HH_JAHR', 'MIN_GEBAEUDEJAHR'], axis=1) customers_outliers_removed['ANZ_HAUSHALTE_AKTIV'] = [val if val <= 10 and val >= 1 else np.nan for val in customers_outliers_removed['ANZ_HAUSHALTE_AKTIV']] customers_outliers_removed['LP_LEBENSPHASE_FEIN'] = [val if val <= 40 and val >= 1 else np.nan for val in customers_outliers_removed['LP_LEBENSPHASE_FEIN']] customers_outliers_removed['ALTER_HH'] = [val if val <= 21 and val >= 1 else np.nan for val in customers_outliers_removed['ALTER_HH']] customers_outliers_removed = customers_outliers_removed.drop(['LP_LEBENSPHASE_FEIN', 'GEMEINDETYP', 'VERDICHTUNGSRAUM', 'GEBURTSJAHR', 'EINGEZOGENAM_HH_JAHR', 'MIN_GEBAEUDEJAHR'], axis=1) def modified_tukey_rule(data_frame, column_name): data = data_frame[column_name] Q1 = data.quantile(0.001) Q3 = data.quantile(0.999) IQR = Q3 - Q1 max_value = Q3 + 1.5 * IQR min_value = Q1 - 1.5 * IQR return data_frame[((data_frame[column_name] < max_value) & (data_frame[column_name] > min_value)) | (data_frame[column_name].isna())] for column in [col for col in list(azdias_outliers_removed.columns) if col not in cat_vars['cat_var']]: azdias_outliers_removed = modified_tukey_rule(azdias_outliers_removed, column) for column in [col for col in list(customers_outliers_removed.columns) if col not in cat_vars['cat_var']]: customers_outliers_removed = modified_tukey_rule(customers_outliers_removed, column) azdias_outliers_removed azdias_outliers_removed.describe() customers_outliers_removed = customers_clean.copy() customers_outliers_removed = customers_outliers_removed.drop(['LNR', 'KBA13_ANZAHL_PKW'], axis=1) #dropping LNR as it is ordering number customers_outliers_removed['ANZ_HAUSHALTE_AKTIV'] = [val if val <= 10 and val >= 1 else np.nan for val in customers_outliers_removed['ANZ_HAUSHALTE_AKTIV']] customers_outliers_removed['LP_LEBENSPHASE_FEIN'] = [val if val <= 40 and val >= 1 else np.nan for val in customers_outliers_removed['LP_LEBENSPHASE_FEIN']] customers_outliers_removed['ALTER_HH'] = [val if val <= 21 and val >= 1 else np.nan for val in customers_outliers_removed['ALTER_HH']] customers_outliers_removed = customers_outliers_removed.drop(['LP_LEBENSPHASE_FEIN', 'GEMEINDETYP', 'VERDICHTUNGSRAUM', 'GEBURTSJAHR', 'EINGEZOGENAM_HH_JAHR', 'MIN_GEBAEUDEJAHR'], axis=1) for column in [col for col in list(customers_outliers_removed.columns) if col not in cat_vars['cat_var']]: customers_outliers_removed = modified_tukey_rule(customers_outliers_removed, column) customers_outliers_removed customers_outliers_removed.describe() ###Output _____no_output_____ 
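###Markdown
As a quick sanity check that the outlier treatment is not discarding an unreasonable share of either dataset, it is worth comparing row counts before and after. A minimal sketch, assuming the variable names used in the cells above:

###Code
for name, before, after in [('azdias', azdias_clean, azdias_outliers_removed),
                            ('customers', customers_clean, customers_outliers_removed)]:
    dropped = len(before) - len(after)
    print(f"{name}: dropped {dropped:,} rows ({dropped / len(before):.1%}) as outliers")
###Output
_____no_output_____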
###Markdown 0.3 Imputing the rest of the missing values ###Code ## For the remainder of the columns, swap missing values for the median of the attribute. def replace_nan_with_median(df): column_medians = df.median() df = df.fillna(column_medians) return df azdias = replace_nan_with_median(azdias_outliers_removed) customers = replace_nan_with_median(customers_outliers_removed) azdias.head() customers.head() azdias.to_csv("azdias.csv", index = False) customers.to_csv("customers.csv", index = False) ###Output _____no_output_____ ###Markdown Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. 1.1 Create a PCA Pipeline that we will use in our unsupervised training model As our dataset has a very large number of features, a good idea is to use Principal Component Analysis (PCA) to reduce its dimensionality before clustering. ###Code pipeline = Pipeline([ ('impute', SimpleImputer()), ('scale', StandardScaler()), ('pca' , PCA()), ]) # Fit the model azdias_pca_fit = pipeline.fit(azdias) pca = pipeline[2] # Investigate the variance accounted for by each principal component. plt.figure(figsize=(20, 20)) plt.subplot(2,1,1) plt.bar(list(range(len(pca.explained_variance_ratio_))), pca.explained_variance_ratio_) plt.xlabel('Principal Components') plt.ylabel('Explained Variance Ratio') plt.subplot(2,1,2) plt.plot(pca.explained_variance_ratio_.cumsum()) plt.xlabel('Principal Components') plt.ylabel('Cumulative Explained Variance Ratio') # Investigate the variance accounted for by top 170 principal components. plt.figure(figsize=(20, 20)) plt.subplot(2,1,1) plt.bar(list(range(len(pca.explained_variance_ratio_)))[:170], pca.explained_variance_ratio_[:170]) plt.xlabel('Principal Components') plt.ylabel('Explained Variance Ratio') print('The amount of variance explained by top 170 components is {}.'.format(pca.explained_variance_ratio_[:170].sum())) ###Output The amount of variance explained by top 170 components is 0.9088575062057613. ###Markdown Top 170 components explain over 90% of the variation in the data. ###Code # Investigate the variance accounted for by top 10 principal components. plt.figure(figsize=(20, 20)) plt.subplot(2,1,1) plt.bar(list(range(len(pca.explained_variance_ratio_)))[:10], pca.explained_variance_ratio_[:10]) plt.xlabel('Principal Components') plt.ylabel('Explained Variance Ratio') ###Output _____no_output_____ ###Markdown For the purpose of clustering, let's use the first 5 principal components. Let's incorporate the newly obtained PCA scores in the K-means algorithm. That's how we can perform segmentation based on principal component scores instead of the original features. Let's start with the PCA scores we can obtain using the fit_transform() method. 
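Before re-fitting with a fixed number of components, it can also help to check which original features carry the most weight in the leading components, since that is what the cluster interpretation will ultimately rest on. A minimal sketch, assuming the full `PCA` fit from above (`pca`) and that the imputer and scaler preserve the column order of `azdias`, which they do here because no column is entirely empty (the name `loadings` is introduced purely for illustration):

###Code
# Map each principal component's weights back to the original feature names
loadings = pd.DataFrame(pca.components_, columns=azdias.columns)

# Largest absolute weights on the first principal component
print(loadings.iloc[0].abs().sort_values(ascending=False).head(10))
###Output
_____no_output_____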
###Code # Fit the model with new number of features and transform the data pipeline = Pipeline([ ('impute', SimpleImputer()), ('scale', StandardScaler()), ('pca' , PCA(n_components = 5)), ]) azdias_pca = pipeline.fit_transform(azdias) wcss = [] n_clusters = 10 for i in range(1, n_clusters): kmeans_pca = KMeans(n_clusters = i, init = 'k-means++', random_state = 1) kmeans_pca.fit(azdias_pca) wcss.append(kmeans_pca.inertia_) # Plot the WCSS against the number of components - the "elbow curve" fig = plt.figure(figsize = (14,8)) plt.plot(range(1,n_clusters), wcss, marker = 'o', linestyle = '--') plt.xlabel('Number of Clusters') plt.ylabel('WCSS') plt.title('K-means with PCA Clustering') plt.show() ###Output _____no_output_____ ###Markdown Based on the elbow curve, we will choose 4 clusters for our model. ###Code # We have chosen 4 clusters, so we run K-means with number of clusters = 4 # Pick same initializer and random state as before kmeans_pca = KMeans(n_clusters = 4, init = 'k-means++', random_state = 1) # Fit the data with K-means PCA kmeans_pca.fit(azdias_pca) ###Output _____no_output_____ ###Markdown 1.2 K-means clustering with PCA results ###Code # Now we can predict the cluster for our customers based on the above results customers_pca = pipeline.transform(customers.drop(['ONLINE_PURCHASE', 'CUSTOMER_GROUP_MULTI', 'PRODUCT_GROUP_FOOD', 'PRODUCT_GROUP_COSMETIC_AND_FOOD'], axis=1)) customers_clusters = kmeans_pca.predict(customers_pca) customers_pca_kmeans = pd.DataFrame(customers_pca) customers_pca_kmeans.columns = ['Component 1','Component 2','Component 3','Component 4','Component 5'] customers_pca_kmeans['Cluster'] = customers_clusters customers_pca_kmeans customers_pca_kmeans['Cluster'].value_counts() x_axis = customers_pca_kmeans['Component 2'] y_axis = customers_pca_kmeans['Component 1'] fig = plt.figure(figsize = (14,8)) sns.scatterplot(x_axis, y_axis, hue = customers_pca_kmeans['Cluster']) plt.title('Clusters by PCA Components') plt.show() customers_with_clusters = customers.copy() customers_with_clusters['Cluster'] = customers_clusters customers_with_clusters0 = customers_with_clusters[customers_with_clusters['Cluster'] == 0] customers_with_clusters1 = customers_with_clusters[customers_with_clusters['Cluster'] == 1] customers_with_clusters2 = customers_with_clusters[customers_with_clusters['Cluster'] == 2] customers_with_clusters3 = customers_with_clusters[customers_with_clusters['Cluster'] == 3] # Find what are the main differences between the clusters diff = pd.DataFrame({'Cluster 3': customers_with_clusters3.mean(), 'Cluster 1': customers_with_clusters1.mean()}) diff['delta'] = abs(diff['Cluster 3'] - diff['Cluster 1']) diff.sort_values(['delta'], ascending = False).head(5) ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. 
In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code mailout_train = pd.read_csv('../Capstone/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_train.head() mailout_train.shape ###Output _____no_output_____ ###Markdown 2.1 Clean the data ###Code ## Take only the columns as we had before mailout_train = mailout_train[list(azdias.columns) + ['RESPONSE', 'LNR']] ## Clean the data mailout_train_miss = find_missing_vals(mailout_train) mailout_train_miss.head(30).plot(kind='bar', figsize=(20,8), fontsize=13) plt.title("Distribution of missing values in train dataset", fontsize=13, fontweight="bold") plt.xlabel("Features", fontsize=13) plt.ylabel("Precent of missing values", fontsize=13) ###Output _____no_output_____ ###Markdown That is fine, we don't need to remove any other features. ###Code mailout_train.dtypes.value_counts() mailout_train.dtypes[mailout_train.dtypes == 'object'] mailout_train['CAMEO_DEUG_2015'].value_counts() mailout_train['CAMEO_INTL_2015'].value_counts() mailout_train['OST_WEST_KZ'].value_counts() ## Clean the object columns for azdias ## We need to: ## 1. Replace strings like X or XX with missing values ## 2. Modify categorical variables to become dummy variables ## 3. Drop other column types mailout_train = mailout_train.replace('X', np.nan).replace('XX', np.nan) mailout_train['CAMEO_DEUG_2015'] = pd.to_numeric(mailout_train["CAMEO_DEUG_2015"]) mailout_train['CAMEO_INTL_2015'] = pd.to_numeric(mailout_train["CAMEO_INTL_2015"]) mailout_train['OST_WEST_KZ'] = [1 if x == 'W' else 0 for x in mailout_train['OST_WEST_KZ']] mailout_train.dtypes.value_counts() mailout_train = replace_nan_with_median(mailout_train) mailout_train.head() ###Output _____no_output_____ ###Markdown 2.2 Investigate the response variable ###Code sns.countplot("RESPONSE", data=mailout_train) ###Output /Users/kozersky/opt/anaconda3/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. warnings.warn( ###Markdown The response variable is binary, so I think it will be a good idea to try three models: a logistic regression, XGBoost Classifier and LGBM Classifier. 
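Because positive responses are so rare, it is also worth making sure the rare class is represented proportionally in both partitions when splitting. The cells below use a plain random split; a stratified split is a small optional variation, sketched here with illustrative names (`X_strat`, `y_strat`) and the `mailout_train` frame prepared above:

###Code
from sklearn.model_selection import train_test_split

X_strat = mailout_train.drop(['RESPONSE', 'LNR'], axis=1)  # LNR is only an identifier
y_strat = mailout_train['RESPONSE']

# stratify keeps the positive-class rate (approximately) equal in train and test
X_tr, X_te, y_tr, y_te = train_test_split(X_strat, y_strat, stratify=y_strat, random_state=0)
print(y_tr.mean(), y_te.mean())
###Output
_____no_output_____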
###Code X = mailout_train.drop(['RESPONSE', 'LNR'], axis = 1) y = mailout_train['RESPONSE'] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) models = {'LR': LogisticRegression(), 'LGBM': LGBMClassifier()} metric_table = pd.DataFrame(columns=['classifiers', 'fpr','tpr','auc']) # Now let's use each model and compare their performance for model in models: pipeline = Pipeline([ ('impute', SimpleImputer()), ('scale', StandardScaler()), ('clf', models[model]) ]) pipeline.fit(X_train, y_train) y_proba = pipeline.predict_proba(X_test)[::,1] fpr, tpr, _ = roc_curve(y_test, y_proba) auc = roc_auc_score(y_test, y_proba) metric_table = metric_table.append({'classifier':models[model].__class__.__name__, 'fpr':fpr, 'tpr':tpr, 'auc':auc}, ignore_index=True) metric_table[['classifier', 'auc']] fig = plt.figure(figsize=(14,8)) for i in metric_table.index: plt.plot(metric_table.loc[i]['fpr'], metric_table.loc[i]['tpr'], label="{}, AUC={:.3f}".format(metric_table.loc[i]['classifier'], metric_table.loc[i]['auc'])) plt.plot([0,1], [0,1], color='orange', linestyle='--') plt.xticks(np.arange(0.0, 1.1, step=0.1)) plt.xlabel("False Positive Rate", fontsize=15) plt.yticks(np.arange(0.0, 1.1, step=0.1)) plt.ylabel("True Positive Rate", fontsize=15) plt.title('ROC Curve Analysis', fontweight='bold', fontsize=15) plt.legend(prop={'size':13}, loc='lower right') plt.show() ###Output _____no_output_____ ###Markdown At the moment it looks like LGBM performs better with AUC = 0.641. For now it's not a bad start, but we could try to improve those results. Note that we got the error for the logistic regression /Users/kozersky/opt/anaconda3/lib/python3.8/site-packages/sklearn/linear_model/_logistic.py:762: ConvergenceWarning: lbfgs failed to converge (status=1):STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.We can try to take this into account to improve our Logistic Regression. ###Code X = mailout_train.drop(['RESPONSE'], axis = 1) y = mailout_train['RESPONSE'] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) models = {'LR': LogisticRegression(max_iter=10000), 'LGBM': LGBMClassifier()} metric_table = pd.DataFrame(columns=['classifiers', 'fpr','tpr','auc']) # Now let's use each model and compare their performance for model in models: pipeline = Pipeline([ ('impute', SimpleImputer()), ('scale', StandardScaler()), ('clf', models[model]) ]) pipeline.fit(X_train, y_train) y_proba = pipeline.predict_proba(X_test)[::,1] fpr, tpr, _ = roc_curve(y_test, y_proba) auc = roc_auc_score(y_test, y_proba) metric_table = metric_table.append({'classifier':models[model].__class__.__name__, 'fpr':fpr, 'tpr':tpr, 'auc':auc}, ignore_index=True) metric_table[['classifier', 'auc']] fig = plt.figure(figsize=(14,8)) for i in metric_table.index: plt.plot(metric_table.loc[i]['fpr'], metric_table.loc[i]['tpr'], label="{}, AUC={:.3f}".format(metric_table.loc[i]['classifier'], metric_table.loc[i]['auc'])) plt.plot([0,1], [0,1], color='orange', linestyle='--') plt.xticks(np.arange(0.0, 1.1, step=0.1)) plt.xlabel("False Positive Rate", fontsize=15) plt.yticks(np.arange(0.0, 1.1, step=0.1)) plt.ylabel("True Positive Rate", fontsize=15) plt.title('ROC Curve Analysis', fontweight='bold', fontsize=15) plt.legend(prop={'size':13}, loc='lower right') plt.show() ###Output _____no_output_____ ###Markdown That did improve AUC but not by a lot. Another thing we can try is to perform a Grid Search. Let's try it for LGBM as it's the better performing model. 
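One detail to keep in mind with the grid search: by default GridSearchCV scores classifiers with accuracy, which is not very informative on such an imbalanced target, so passing scoring='roc_auc' keeps the search aligned with the metric we actually report. A minimal sketch with a deliberately small illustrative grid (the full grid is defined in the next cell), reusing the X and y defined above:

###Code
from sklearn.model_selection import GridSearchCV
from lightgbm import LGBMClassifier

small_grid = {'learning_rate': [0.005, 0.01], 'num_leaves': [8, 16]}

grid_auc = GridSearchCV(LGBMClassifier(), small_grid,
                        scoring='roc_auc',  # optimise the metric used for evaluation
                        cv=4, n_jobs=-1)
grid_auc.fit(X, y)
print(grid_auc.best_params_, grid_auc.best_score_)
###Output
_____no_output_____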
###Code ## Inspired by https://www.kaggle.com/bitit1994/parameter-grid-search-lgbm-with-scikit-learn gridParams = { 'learning_rate': [0.005, 0.01], 'n_estimators': [8,16], 'num_leaves': [8,12,16], # large num_leaves helps improve accuracy but might lead to over-fitting 'objective' : ['binary'], 'random_state' : [500], 'reg_lambda' : [1,1.2,1.4], } grid = GridSearchCV(LGBMClassifier(), gridParams, verbose=1, cv=4, n_jobs=-1) # Run the grid grid.fit(X, y) # Print the best parameters found print(grid.best_params_) print(grid.best_score_) X = mailout_train.drop(['RESPONSE'], axis = 1) y = mailout_train['RESPONSE'] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) models = {'LR': LogisticRegression(max_iter=10000), 'LGBM': LGBMClassifier(learning_rate= 0.005, n_estimators=8, num_leaves= 8, objective= 'binary', random_state= 0, reg_lambda= 1)} metric_table = pd.DataFrame(columns=['classifiers', 'fpr','tpr','auc']) # Now let's use each model and compare their performance for model in models: pipeline = Pipeline([ ('impute', SimpleImputer()), ('scale', StandardScaler()), ('clf', models[model]) ]) pipeline.fit(X_train, y_train) y_proba = pipeline.predict_proba(X_test)[::,1] fpr, tpr, _ = roc_curve(y_test, y_proba) auc = roc_auc_score(y_test, y_proba) metric_table = metric_table.append({'classifier':models[model].__class__.__name__, 'fpr':fpr, 'tpr':tpr, 'auc':auc}, ignore_index=True) metric_table[['classifier', 'auc']] fig = plt.figure(figsize=(14,8)) for i in metric_table.index: plt.plot(metric_table.loc[i]['fpr'], metric_table.loc[i]['tpr'], label="{}, AUC={:.3f}".format(metric_table.loc[i]['classifier'], metric_table.loc[i]['auc'])) plt.plot([0,1], [0,1], color='orange', linestyle='--') plt.xticks(np.arange(0.0, 1.1, step=0.1)) plt.xlabel("False Positive Rate", fontsize=15) plt.yticks(np.arange(0.0, 1.1, step=0.1)) plt.ylabel("True Positive Rate", fontsize=15) plt.title('ROC Curve Analysis', fontweight='bold', fontsize=15) plt.legend(prop={'size':13}, loc='lower right') plt.show() ###Output _____no_output_____ ###Markdown Nice! Our LGBM model's AUC has improved! Let's submit it! Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter.Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
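Once the cells below have written kaggle.csv, a quick format check before uploading can catch easy-to-miss mistakes such as wrong columns or duplicated IDs. A minimal sketch, specific to the submission produced here, where the RESPONSE values are predicted probabilities:

###Code
submission = pd.read_csv('kaggle.csv')

assert list(submission.columns) == ['LNR', 'RESPONSE']
assert submission['LNR'].is_unique
assert submission['RESPONSE'].between(0, 1).all()  # probabilities in this submission
print(submission.shape)
###Output
_____no_output_____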
###Code mailout_test = pd.read_csv('../Capstone/Udacity_MAILOUT_052018_TEST.csv', sep=';') mailout_test = mailout_test[list(azdias.columns) + ['LNR']] ## Clean the object columns for azdias ## We need to: ## 1. Replace strings like X or XX with missing values ## 2. Modify categorical variables to become dummy variables ## 3. Drop other column types mailout_test = mailout_test.replace('X', np.nan).replace('XX', np.nan) mailout_test['CAMEO_DEUG_2015'] = pd.to_numeric(mailout_test["CAMEO_DEUG_2015"]) mailout_test['CAMEO_INTL_2015'] = pd.to_numeric(mailout_test["CAMEO_INTL_2015"]) mailout_test['OST_WEST_KZ'] = [1 if x == 'W' else 0 for x in mailout_test['OST_WEST_KZ']] mailout_test.dtypes.value_counts() my_model = LGBMClassifier(learning_rate= 0.005, n_estimators=8, num_leaves= 8, objective= 'binary', random_state= 0, reg_lambda= 1) my_model.fit(mailout_train.drop(['RESPONSE', 'LNR'], axis = 1), mailout_train['RESPONSE']) predictions = my_model.predict_proba(mailout_test.drop(['LNR'], axis = 1))[:,1] kaggle = pd.DataFrame(index=mailout_test['LNR'], data=predictions) kaggle.rename(columns={0: "RESPONSE"}, inplace=True) kaggle.head() kaggle.to_csv('kaggle.csv') kaggle.shape ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. 
###Code # Import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt from IPython.display import display, display_markdown # Scikit-learn from sklearn import preprocessing from sklearn.impute import SimpleImputer from sklearn.decomposition import PCA from sklearn.cluster import KMeans from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split, GridSearchCV, RepeatedStratifiedKFold from sklearn.metrics import roc_auc_score, roc_curve from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier from sklearn.linear_model import LogisticRegression # Magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. 
Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # Uncomment/comment the appropriate paths for working from the Udacity workspace, or locally # azdias_path = '../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv' azdias_path = './Data/Udacity_AZDIAS_052018.csv' # customers_path = '../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv' customers_path = './Data/Udacity_CUSTOMERS_052018.csv' # Load in the data print('Loading azdias data... ', end='') azdias = pd.read_csv(azdias_path, sep=';') print("Done!") print('Loading customers data... ', end='') customers = pd.read_csv(customers_path, sep=';') print("Done!") ###Output Loading azdias data... ###Markdown Start by printing out a few statistics about the dataframes; their sizes, column types, and how much data is missing. ###Code def dataframe_overview(df): ''' Calculate and output various statistics about a dataframe Args: df (DataFrame): dataframe to analyse ''' # Print shape of dataframe print("{} rows and {} columns.".format(df.shape[0], df.shape[1])) # Print counts of columns with type number and object types = ['number', 'object'] for col_type in types: count = df.select_dtypes(include=col_type).shape[1] print('{} columns of type \'{}\'.'.format(count, col_type)) # Print fraction of missing data missing_fraction = df.isna().mean().mean() print('{:.0%} of the data is null.'.format(missing_fraction)) print('Overview of azdias:') dataframe_overview(azdias) print('=====') print('Overview of customers:') dataframe_overview(customers) ###Output Overview of azdias: 891221 rows and 366 columns. 360 columns of type 'number'. 6 columns of type 'object'. 10% of the data is null. ===== Overview of customers: 191652 rows and 369 columns. 361 columns of type 'number'. 8 columns of type 'object'. 20% of the data is null. ###Markdown We can see that both datasets contain a mix of numeric and object types, and have a significant amount of missing data. We'll start cleaning the data, by analysing the `azdias` dataset to create a set a cleaning functions that we can then apply to both the `azdias` and `customers` datasets. 0.1 Invalid valuesLoading the `azdias` data gives a warning that columns 18 and 19 have mixed types. Before we proceed, let's check out those columns to see what is triggering the warning. ###Code # Look at the values in column 18 azdias.iloc[:, 18].value_counts() # Look at the values in column 19 azdias.iloc[:, 19].value_counts() ###Output _____no_output_____ ###Markdown We can see that column 18 (`CAMEO_DEUG_2015`) has 373 rows with the value `X`, and column 19 (`CAMEO_INTL_2015`) has 373 rows with the value `XX`. According to the `DIAS Attributes - Values 2017.xlsx` file these columns should only have integer values, so we should set these to NaN and set the column type to numeric. This will also fix the problem of having (for example) `35` and `35.0` counted as different values.Whilst no other columns raised an error due to this, let's check out all columns that have been read in as type `object`. 
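A small caveat on the scan below: Series.str.match only anchors the pattern at the start of each value, so a hypothetical value like 'X1' would also be counted. That makes no difference for these columns, where the placeholders are exactly 'X' or 'XX', but str.fullmatch (available in pandas 1.1+) expresses the intent more precisely; a minimal sketch on one of the affected columns:

###Code
# Count values that consist entirely of Xs, rather than values that merely start with X
exact_placeholders = azdias['CAMEO_DEU_2015'].str.fullmatch(r'X+', na=False).sum()
print(exact_placeholders)
###Output
_____no_output_____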
These could be hiding similar values if the column only contains strings. ###Code # List to hold column names in cols_with_x = [] # Iterate through columns with type object for col in azdias.columns[azdias.dtypes == 'object']: # Count the number of values with /X+/ (i.e. X, XX) xs_found = azdias[col].str.match(r'X+', na=False).sum() # If found, print the column name and number found if xs_found > 0: print("Found {} values matching '/X+/' in column '{}'.".format(xs_found, col)) cols_with_x.append(col) cols_with_x ###Output Found 373 values matching '/X+/' in column 'CAMEO_DEU_2015'. Found 373 values matching '/X+/' in column 'CAMEO_DEUG_2015'. Found 373 values matching '/X+/' in column 'CAMEO_INTL_2015'. ###Markdown There are three columns (`CAMEO_DEU_2015`, `CAMEO_DEUG_2015`, `CAMEO_INTL_2015`) that have invalid X or XX values. We'll adapt the code above to define a function to replace these values with NaN. ###Code def replace_invalid_x_values(df): ''' Replace invalid values ('X', 'XX') with np.nan and convert to numeric if possible Args: df (DataFrame): dataframe to process Returns: df (DataFrame): processed dataframe ''' # Iterate through all columns with type object for col in df.columns[df.dtypes == 'object']: xs_found = df[col].str.match(r'X+', na=False).sum() if xs_found > 0: df[col] = df[col].replace(r'X+', np.nan, regex=True) print("Found {} values matching '/X+/' in column '{}'. replaced with NaN.".format(xs_found, col)) # If possible, convert to numeric df[col] = pd.to_numeric(df[col], errors='ignore') return df ###Output _____no_output_____ ###Markdown The `DIAS Attributes - Values 2017.xlsx` file tells us that some columns are using values other than NaN to denote missing data; such as -1, 0, or 9. We should replace these values with NaN to get a better picture of missing data in the dataset. To do this efficiently, we can use the Excel file to generate a mapping of column names to values used for unknown data. ###Code # Use the provided Excel file to generate a mapping of columns to the values used for unknown data # Load in the Excel file attributes_df = pd.read_excel('DIAS Attributes - Values 2017.xlsx', skiprows=[0]) attributes_df.head() # Drop the empty first column, and forward fill in the NaNs in the Attribute and Description columns attributes_df.drop(labels=['Unnamed: 0'], axis=1, inplace=True) attributes_df[['Attribute', 'Description']] = attributes_df[['Attribute', 'Description']].fillna(method='ffill') attributes_df.head() # Filter down to only the rows concerning values for unknown data unknown_val_df = attributes_df[['Attribute', 'Value']][attributes_df['Meaning'].str.startswith('unknown', na=False)] unknown_val_df.head() # Convert to dict, and split out strings into lists of ints unknown_value_dict = dict(zip(unknown_val_df['Attribute'], unknown_val_df['Value'])) for k, v in unknown_value_dict.items(): if type(v) == str: unknown_value_dict[k] = list(map(int, list(v.split(',')))) unknown_value_dict ###Output _____no_output_____ ###Markdown We can use this dictionary (`unknown_value_dict`) to iterate through the columns in the dataset and replace the placeholders used for unknown data with NaN. 
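As a minimal sketch of the replacement we are about to apply (toy column and made-up unknown codes only, not the real data), the idea is:
###Code
# Illustrative sketch only: a toy column where -1 and 0 stand in for 'unknown'
import numpy as np
import pandas as pd

toy = pd.DataFrame({'AGER_TYP': [1, 2, -1, 0, 3]})
toy_unknowns = {'AGER_TYP': [-1, 0]}    # same structure as unknown_value_dict
for col, bad_values in toy_unknowns.items():
    toy[col] = toy[col].replace(bad_values, np.nan)
print(toy['AGER_TYP'].tolist())         # [1.0, 2.0, nan, nan, 3.0]
###Output _____no_output_____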
###Code def replace_unknown_with_nan(df, unknown_value_dict): ''' Replace values in the dataframe with np.nan using the provided dict as a map Args: df (DataFrame): dataframe to process unknown_value_dict (dict): mapping of column names to values used for unknown data Returns: df (DataFrame): processed dataframe columns_not_found (list): list of columns in the dict not found in the dataframe ''' # Keep track of any columns not found in the dataset columns_not_found = [] # Iterate through the dict of alternative values for null data, replacing with nan for col_name, null_values in unknown_value_dict.items(): if col_name in df.columns: df[col_name] = df[col_name].replace(null_values, np.nan) else: columns_not_found.append(col_name) return df, columns_not_found ###Output _____no_output_____ ###Markdown We can also use the Excel file to generate a list of columns for which we have explanations of their meanings. We cannot be sure of the meaning of any additional columns in the dataset, and should drop them. ###Code known_columns = attributes_df['Attribute'].unique() print('There are {} columns with known meanings.'.format(len(known_columns))) unknown_columns = np.setdiff1d(azdias.columns, known_columns) print('There are {} columns in the dataframe without a known meaning.'.format(len(unknown_columns))) print(unknown_columns) ###Output There are 314 columns with known meanings. There are 94 columns in the dataframe without a known meaning. ['AKT_DAT_KL' 'ALTERSKATEGORIE_FEIN' 'ALTER_KIND1' 'ALTER_KIND2' 'ALTER_KIND3' 'ALTER_KIND4' 'ANZ_KINDER' 'ANZ_STATISTISCHE_HAUSHALTE' 'ARBEIT' 'CAMEO_INTL_2015' 'CJT_KATALOGNUTZER' 'CJT_TYP_1' 'CJT_TYP_2' 'CJT_TYP_3' 'CJT_TYP_4' 'CJT_TYP_5' 'CJT_TYP_6' 'D19_BANKEN_DIREKT' 'D19_BANKEN_GROSS' 'D19_BANKEN_LOKAL' 'D19_BANKEN_REST' 'D19_BEKLEIDUNG_GEH' 'D19_BEKLEIDUNG_REST' 'D19_BILDUNG' 'D19_BIO_OEKO' 'D19_BUCH_CD' 'D19_DIGIT_SERV' 'D19_DROGERIEARTIKEL' 'D19_ENERGIE' 'D19_FREIZEIT' 'D19_GARTEN' 'D19_HANDWERK' 'D19_HAUS_DEKO' 'D19_KINDERARTIKEL' 'D19_KONSUMTYP_MAX' 'D19_KOSMETIK' 'D19_LEBENSMITTEL' 'D19_LETZTER_KAUF_BRANCHE' 'D19_LOTTO' 'D19_NAHRUNGSERGAENZUNG' 'D19_RATGEBER' 'D19_REISEN' 'D19_SAMMELARTIKEL' 'D19_SCHUHE' 'D19_SONSTIGE' 'D19_SOZIALES' 'D19_TECHNIK' 'D19_TELKO_MOBILE' 'D19_TELKO_ONLINE_QUOTE_12' 'D19_TELKO_REST' 'D19_TIERARTIKEL' 'D19_VERSAND_REST' 'D19_VERSICHERUNGEN' 'D19_VERSI_DATUM' 'D19_VERSI_OFFLINE_DATUM' 'D19_VERSI_ONLINE_DATUM' 'D19_VERSI_ONLINE_QUOTE_12' 'D19_VOLLSORTIMENT' 'D19_WEIN_FEINKOST' 'DSL_FLAG' 'EINGEFUEGT_AM' 'EINGEZOGENAM_HH_JAHR' 'EXTSEL992' 'FIRMENDICHTE' 'GEMEINDETYP' 'HH_DELTA_FLAG' 'KBA13_ANTG1' 'KBA13_ANTG2' 'KBA13_ANTG3' 'KBA13_ANTG4' 'KBA13_BAUMAX' 'KBA13_CCM_1401_2500' 'KBA13_GBZ' 'KBA13_HHZ' 'KBA13_KMH_210' 'KK_KUNDENTYP' 'KOMBIALTER' 'KONSUMZELLE' 'LNR' 'MOBI_RASTER' 'RT_KEIN_ANREIZ' 'RT_SCHNAEPPCHEN' 'RT_UEBERGROESSE' 'SOHO_KZ' 'STRUKTURTYP' 'UMFELD_ALT' 'UMFELD_JUNG' 'UNGLEICHENN_FLAG' 'VERDICHTUNGSRAUM' 'VHA' 'VHN' 'VK_DHT4A' 'VK_DISTANZ' 'VK_ZG11'] ###Markdown 0.2 Missing valuesNext, we'll look how much data is missing in the `azdias` dataframe, and use this as a basis to determine if any columns and rows need to be dropped. ###Code # Define a function to calculate the fraction of nan values in each column def get_null_fractions(df, threshold, plot=False): ''' Function to return the columns with fraction of null data greater than the threshold value. Optionally plot the results. 
Args: df (DataFrame): dataframe to analyse threshold (float): return columns with fraction over this value plot (boolean): whether to plot the result (default=False) Return: fraction_null (Series): Columns and the fraction of null values ''' # Select columns with a fraction of missing data that exceeds the given threshold fraction_null = df.isna().mean() fraction_null = fraction_null[fraction_null > threshold] if plot == True: fraction_null.plot(kind="bar") plt.xlabel("Column name") plt.ylabel("Fraction of NaN values") plt.title("Columns with >{:.0%} values as NaN".format(threshold)); return fraction_null # Show columns with more than 50% null values get_null_fractions(azdias, threshold=0.5, plot=True)
###Output _____no_output_____
###Markdown We can see that there are six columns with over half null values. There may be more once we have run the function to replace the additional values for unknown data with NaN. We will later drop these columns from the dataset with the following function.
###Code def drop_columns(df, columns): ''' Drop columns from the dataframe Args: df (DataFrame): dataframe to process columns (list): columns to drop Return: df_dropped (DataFrame): dataframe with columns dropped ''' df_dropped = df.drop(labels=columns, axis=1) return df_dropped
###Output _____no_output_____
###Markdown Now let's look at missing values by row.
###Code # Calculate the fraction of null values on each row of the dataframe fraction_null_by_row = azdias.isna().mean(axis=1) # Plot a weighted histogram of the result plt.hist(fraction_null_by_row, weights=np.ones(len(fraction_null_by_row)) / len(fraction_null_by_row)) plt.xlabel("Fraction of null values") plt.ylabel("Fraction of dataset") plt.title("Histogram of null values by row");
###Output _____no_output_____
###Markdown We can see that most of the rows (>80%) have few missing values (<20%). We will drop the rows with more than 20% missing values using the following function.
###Code def drop_rows_with_missing_values(df, threshold): ''' Drop rows in the dataframe where the fraction of missing values exceeds a threshold Args: df (DataFrame): dataframe to process threshold (float): threshold of missing values above which a row will be dropped Returns: df_dropped (DataFrame): processed dataframe ''' fraction_null_by_row = df.isna().mean(axis=1) # Get indices of rows where the threshold of null values is exceeded idx_to_drop = fraction_null_by_row[fraction_null_by_row > threshold].index # Drop these from the dataframe df_dropped = df.drop(labels=idx_to_drop, axis=0) return df_dropped
###Output _____no_output_____
###Markdown 0.3 Non-numeric columnsThere are a few remaining non-numeric columns in the dataset. As most ML models work best with numeric data, we should convert them where possible.
###Code # Look at non-numeric columns in the dataset azdias[azdias.columns[azdias.dtypes == 'object']].head()
###Output _____no_output_____
###Markdown We know from earlier that:* `CAMEO_DEUG_2015` and `CAMEO_INTL_2015` will be numeric once we have run our cleaning function, and* `D19_LETZTER_KAUF_BRANCHE` and `EINGEFUEGT_AM` are on the list of unknown columns generated earlier, and will be dropped, and* `CAMEO_DEU_2015` is a non-numeric column.Let's look at the remaining column, `OST_WEST_KZ`.
###Code print(azdias['OST_WEST_KZ'].value_counts())
###Output W 629528 O 168545 Name: OST_WEST_KZ, dtype: int64
###Markdown `CAMEO_DEU_2015` and `OST_WEST_KZ` can be simply encoded numerically using sklearn's `LabelEncoder` function.
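As a quick illustration (a self-contained sketch, not part of the cleaning pipeline) of what `LabelEncoder` will do with the two `OST_WEST_KZ` categories:
###Code
# Illustrative sketch: LabelEncoder assigns integer codes to the sorted class labels
from sklearn import preprocessing

le = preprocessing.LabelEncoder()
le.fit(['W', 'O'])
print(list(le.classes_))                # ['O', 'W'] - classes are stored in sorted order
print(le.transform(['W', 'O', 'W']))    # [1 0 1]
###Output _____no_output_____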
###Code def encode_categorical_columns(df, columns, encoders_to_use=None): ''' Encode categorical columns with sklearn's LabelEncoder function Args: df (DataFrame): dataframe to process columns: categorical columns to encode encoders_to_use (list): list of pre-fitted encoders to use, otherwise will be generated (default None) Returns: df (DataFrame): dataframe with columns encoded encoders_used (list): list of fitted encoders used ''' encoders_used = [] for i, col in enumerate(columns): print("Encoding column '{}'... ".format(col), end="") # If no encoders given, instantiate a new one and fit to the data if encoders_to_use is None: le = preprocessing.LabelEncoder() le.fit(df[col].dropna().unique()) # If encoders are given, use them else: le = encoders_to_use[i] print("Using existing encoder... ", end="") df[col] = le.transform(df[col]) encoders_used.append(le) print("Complete.") return df, encoders_used ###Output _____no_output_____ ###Markdown 0.4 Imputing missing valuesFor the remaining missing values in the dataset we can fill with the modal value. ###Code def impute_with_mode(df): ''' Impute missing values in the dataframe with the most frequent value. Uses sklearn's SimpleImputer. Args: df (DataFrame): dataframe to process Returns: df (DataFrame): dataframe with values imputed ''' imputer = SimpleImputer(strategy='most_frequent', missing_values=np.nan) imputer = imputer.fit(df) df.iloc[:,:] = imputer.transform(df) return df ###Output _____no_output_____ ###Markdown 0.5 Bringing it all togetherIn the steps above we've defined a series of fuctions that perform specific cleaning steps. Now we can bring all of them together into a single function that will:1. Remove invalid values (e.g. X, XX)2. Replace values used for unknown data with NaN3. Drop columns for which we don't know their meaning4. Identify and drop columns and rows with lots of missing values5. Drop or encode non-numeric columns6. Impute missing valuesThe function will first clean the `azdias` dataset, and then clean the `customers` dataset, dropping the same columns and using the same encoders for non-numeric data. ###Code # Based on section 0.3 we know that the following columns need to be encoded cat_cols_to_encode = ['CAMEO_DEU_2015', 'OST_WEST_KZ'] def clean_df(df, cols_to_encode, columns_to_drop=None, col_nan_threshold=0.5, row_nan_threshold=0.2, encoders=None): ''' Clean dataframe Args: df (DataFrame): DataFrame to clean cols_to_encode (list): list of non-numeric columns to encode numerically. columns_to_drop (list): list of columns to drop, if known. Otherwise columns with null fraction > col_nan_threshold will be dropped (default None). col_nan_threshold (float): fraction of null values allowed before column is dropped (default 0.5). row_nan_threshold (float): fraction of null values allowed before row is dropped (default 0.2). encoders (list): list of pre-fitted encoders to use with cols_to_encode (default None) Returns: df_clean (DataFrame): cleaned DataFrame cols_with_missing (list): list of columns with null fraction > col_nan_threshold that were dropped. encoders (list): list of fitted encoders used with cols_to_encode ''' # Replace invalid X values print('Replacing invalid X values... ') df_clean = replace_invalid_x_values(df) print('Complete.') # Replace values used for unknown data with np.nan print('Replacing unknown values with NaN... 
', end='') df_clean, _ = replace_unknown_with_nan(df_clean, unknown_value_dict) print('Complete.') # Drop columns for which we don't know their meaning print('Dropping unknown columns... ', end='') df_clean = drop_columns(df_clean, unknown_columns) print('Complete.') # Find columns with nan fraction over threshold, and drop # If columns passed as argument, use these instead if columns_to_drop is None: cols_with_missing = get_null_fractions(df_clean, threshold=col_nan_threshold) else: cols_with_missing = columns_to_drop print('Finding and dropping columns with over {:.0%} NaN... '.format(col_nan_threshold), end='') df_clean = drop_columns(df_clean, cols_with_missing.index) print('Complete.') # Find rows with nan fraction over threshold, and drop print('Finding and dropping rows with over {:.0%} NaN... '.format(row_nan_threshold), end='') df_clean = drop_rows_with_missing_values(df_clean, threshold=row_nan_threshold) print('Complete.') # Impute remaining nan values with most frequent value print('Imputing NaN values with most frequent values... ', end='') df_clean = impute_with_mode(df_clean) print('Complete.') # Encode non-numeric columns, using pre-fitted encoders if given print('Encoding non-numeric columns... ') if encoders is None: df_clean, encoders = encode_categorical_columns(df_clean, cat_cols_to_encode) else: df_clean, encoders = encode_categorical_columns(df_clean, cat_cols_to_encode, encoders_to_use=encoders) print('Complete.') # Return cleaned dataframe print('Cleaned dataframe returned.') return df_clean, cols_with_missing, encoders def clean_dataframes(df_azdias, df_customers, cols_to_encode): ''' Cleans the two dataframes, dropping same columns in df_customers as found in df_azdias Args: df_azdias (DataFrame): DataFrame of general population df_customers (DataFrame): DataFrame of customers cols_to_encode (list): list of non-numeric columns to encode numerically Returns: df_azdias_clean (DataFrame): Cleaned dataframe of general population df_customers_clean (DataFrame): Cleaned dataframe of customers ''' print('Cleaning azdias dataframe...') # Clean azdias, and get the list of columns dropped and encoders used df_azdias_clean, cols_to_drop, encoders = clean_df(df_azdias, cat_cols_to_encode) print('==========') print('Cleaning customers dataframe...') # Clean customers, dropping the same columns and using the encoders from cleaning azdias df_customers_clean, _, _ = clean_df(df_customers, cat_cols_to_encode, cols_to_drop, encoders=encoders) # Return cleaned dataframes return df_azdias_clean, df_customers_clean azdias_clean, customers_clean = clean_dataframes(azdias, customers, cat_cols_to_encode) ###Output Cleaning azdias dataframe... Replacing invalid X values... Found 373 values matching '/X+/' in column 'CAMEO_DEU_2015'. replaced with NaN. Found 373 values matching '/X+/' in column 'CAMEO_DEUG_2015'. replaced with NaN. Found 373 values matching '/X+/' in column 'CAMEO_INTL_2015'. replaced with NaN. Complete. Replacing unknown values with NaN... Complete. Dropping unknown columns... Complete. Finding and dropping columns with over 50% NaN... Complete. Finding and dropping rows with over 20% NaN... Complete. Imputing NaN values with most frequent values... Complete. Encoding non-numeric columns... Encoding column 'CAMEO_DEU_2015'... Complete. Encoding column 'OST_WEST_KZ'... Complete. Complete. Cleaned dataframe returned. ========== Cleaning customers dataframe... Replacing invalid X values... Found 126 values matching '/X+/' in column 'CAMEO_DEU_2015'. replaced with NaN. 
Found 126 values matching '/X+/' in column 'CAMEO_DEUG_2015'. replaced with NaN. Found 126 values matching '/X+/' in column 'CAMEO_INTL_2015'. replaced with NaN. Complete. Replacing unknown values with NaN... Complete. Dropping unknown columns... Complete. Finding and dropping columns with over 50% NaN... Complete. Finding and dropping rows with over 20% NaN... Complete. Imputing NaN values with most frequent values... Complete. Encoding non-numeric columns... Encoding column 'CAMEO_DEU_2015'... Using existing encoder... Complete. Encoding column 'OST_WEST_KZ'... Using existing encoder... Complete. Complete. Cleaned dataframe returned.
###Markdown Now that we've cleaned the dataframes, let's re-run the overview function to see how they are looking.
###Code print('Overview of azdias_clean:') dataframe_overview(azdias_clean) print('') print('Overview of customers_clean:') dataframe_overview(customers_clean)
###Output Overview of azdias_clean: 737288 rows and 269 columns. 269 columns of type 'number'. 0 columns of type 'object'. 0% of the data is null. Overview of customers_clean: 134246 rows and 272 columns. 270 columns of type 'number'. 2 columns of type 'object'. 0% of the data is null.
###Markdown Great! We've got no missing data, and the only non-numeric columns are two of the extra columns in the customers dataframe (expected). We can now move on and start extracting information from our datasets. Part 1: Customer Segmentation ReportHere we will use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, we will be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. To describe the relationship between the demographics of the company's existing customers and the general population of Germany we will use *k*-means clustering. To reduce the computational complexity we first want to reduce the dimensionality of the datasets. We will do this with PCA (Principal Component Analysis); identifying the aspects of the data that explain the majority of the variance seen in the data. 1.1 Principal Component AnalysisTo perform PCA, we first need to scale the data such that each column has the same variance. We can use sklearn's [`StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) to do this.
###Code scaler = preprocessing.StandardScaler() azdias_scaled = scaler.fit(azdias_clean).transform(azdias_clean)
###Output _____no_output_____
###Markdown Now we can apply PCA, and look at how the number of components relates to the level of explained variance.
###Code # Apply PCA pca = PCA() pca.fit(azdias_scaled) # Calculate the cumulative explained variance across components cum_var = np.cumsum(pca.explained_variance_ratio_) # Set threshold for explained variance var_threshold = 0.85 num_components = np.where(cum_var > var_threshold)[0][0] print('{} components can be used to explain {:.0%} of the variance.'.format(num_components, var_threshold)) # Plot explained variance against number of components plt.plot(cum_var) # Highlight threshold point plt.plot([0, num_components, num_components], [var_threshold, var_threshold, 0], '--') plt.xlabel('Number of components') plt.ylabel('Explained variance') plt.title('Explained variance vs. number of components');
###Output 104 components can be used to explain 85% of the variance.
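###Markdown Before moving on, we can optionally check which of the original (cleaned) columns carry the most weight in the first principal component. This is only a sanity check, sketched here using the fitted `pca` object and the column names from `azdias_clean`:
###Code
# Sketch: show the five columns with the largest absolute weight in the first component
import pandas as pd

pc1 = pd.Series(pca.components_[0], index=azdias_clean.columns)
print(pc1.reindex(pc1.abs().sort_values(ascending=False).index).head())
###Output _____no_output_____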
###Markdown We can see from the PCA analysis that we only need 104 components to explain 85% of the variance in the dataset. We can repeat the PCA process with this number of components to produce a reduced dataset for use with *k*-means clustering. ###Code # Apply PCA using the reduced number of components pca = PCA(n_components=num_components) azdias_reduced = pca.fit_transform(azdias_scaled) # Verify that the reduced dataset has the properties we expect print('Shape of reduced dataset: {}'.format(azdias_reduced.shape)) print('{:.0%} variance explained.'.format(np.sum(pca.explained_variance_ratio_))) ###Output Shape of reduced dataset: (737288, 104) 85% variance explained. ###Markdown 1.2 *k*-means clusteringWe can now apply *k*-means clustering to the reduced dataset. The optimal number of clusters, *k*, can be determined by using the elbow method. We will run through a range of number of clusters, and store the interia (sum of squared distances of samples to their closest cluster center) for each. By plotting this we will find the lowest value of *k* that provides a good (low) intertia value. ###Code # Range of k values to test k_range = range(1,16) # List to store the inertia values in inertias = [] # Iterate through k values, storing inertia values for k in k_range: kmc = KMeans(n_clusters=k, init='k-means++') kmc.fit(azdias_reduced) inertias.append(kmc.inertia_) print('{} clusters, inertia={:.0f}.'.format(k, kmc.inertia_)) # Plot inertia vs. the number of clusters plt.plot(k_range, inertias, '-o') plt.xlabel('Number of clusters') plt.ylabel('Inertia') plt.title('Inertia vs. number of clusters') plt.xticks(k_range); ###Output _____no_output_____ ###Markdown Whilst there isn't a sharp 'elbow', we can see that by *k*=10 the curve has flattened considerably. We will use this value. ###Code k_clusters = 10 ###Output _____no_output_____ ###Markdown Using the `azdias` dataset we've determined:1. The number of components to use in PCA to reduce the dimensionality, and2. The number of clusters to use in *k*-means clustering.We can now create a pipeline with all of the required steps (scale, reduce, cluster). We will `fit()` the pipeline using the azdias dataset, and then use `predict()` on both datasets. ###Code # Define the pipeline of transformations required # Setting a static random_state for PCA and KMeans to give repeatable output pca_pipeline = Pipeline([ ('scale', preprocessing.StandardScaler()), ('reduce', PCA(n_components=num_components, random_state=42)), ('cluster', KMeans(n_clusters=k_clusters, init='k-means++', random_state=21)) ]) # Fit the pipeline on the azdias data pca_pipeline.fit(azdias_clean) # Predict to get the clustered data azdias_clustered = pca_pipeline.predict(azdias_clean) azdias_clustered_df = pd.DataFrame(azdias_clustered, columns=['Cluster']) # For the customers data, select only columns that are in the azdias dataset (i.e. 
not the extra 3) customers_clustered = pca_pipeline.predict(customers_clean[azdias_clean.columns]) customers_clustered_df = pd.DataFrame(customers_clustered, columns=['Cluster']) # Count how many records are in each cluster azdias_counts = azdias_clustered_df.value_counts().sort_index() customers_counts = customers_clustered_df.value_counts().sort_index() # Visualise this plt.bar(np.arange(k_clusters)+1, azdias_counts.values) plt.xticks(np.arange(k_clusters)+1) plt.xlabel('Cluster') plt.ylabel('Count') plt.title('Count of records in each cluster (azdias)') plt.show(); plt.bar(np.arange(k_clusters)+1, customers_counts.values) plt.xticks(np.arange(k_clusters)+1) plt.xlabel('Cluster') plt.ylabel('Count') plt.title('Count of records in each cluster (customers)'); ###Output _____no_output_____ ###Markdown 1.3 Comparison of customers to overall populationNow that we have clustered both the overall population data and the customers data, we can compare them and determine which parts of the population are more or less likely to be customers. ###Code # Combine the cluster information into a single df, and calculate proportions and differences df_clusters = pd.DataFrame(index=np.arange(k_clusters)+1, columns=['Population Abs.', 'Customers Abs.']) # Insert absolute counts of each cluster df_clusters['Population Abs.'] = azdias_counts.values df_clusters['Customers Abs.'] = customers_counts.values # Normalise relative to number of records in each dataset df_clusters['Population %'] = 100*df_clusters['Population Abs.'] / azdias_counts.sum() df_clusters['Customers %'] = 100*df_clusters['Customers Abs.'] / customers_counts.sum() # Calculate the difference between the customers and population dataset df_clusters['Delta'] = df_clusters['Customers %']-df_clusters['Population %'] df_clusters # Plot the % of records in each cluster for both the customers and overall population plt.bar([x-0.1 for x in df_clusters.index], height=df_clusters['Population %'], width=0.2) plt.bar([x+0.1 for x in df_clusters.index], height=df_clusters['Customers %'], width=0.2) plt.xticks(df_clusters.index) plt.xlabel('Cluster') plt.ylabel('% of records') plt.legend(['Population', 'Customers']); ###Output _____no_output_____ ###Markdown We can see that there are significant differences. These will be easier to see if we just plot the sorted differences. ###Code # Sort the dataframe by the size of the difference df_clusters = df_clusters.sort_values(by='Delta', ascending=False) # Plot the differences between the customers data and the overall population df_clusters['Delta'].plot.bar() plt.xlabel('Cluster') plt.ylabel('Percentage points difference') plt.title('Difference between customers and overall population'); # Pull out the top and bottom two clusters top_two = list(df_clusters.index[:2]) bottom_two = list(df_clusters.index[-2:]) print('Clusters {} have higher representation in the customers dataset.'.format(top_two)) print('Clusters {} have lower representation in the customers dataset.'.format(bottom_two)) ###Output Clusters [8, 3] have higher representation in the customers dataset. Clusters [4, 2] have lower representation in the customers dataset. ###Markdown 1.4 Cluster meanings We know now which clusters are over- and under-represented in the customers dataset versus the general population. 
To interpret the meanings of these clusters, and therefore understand the demographic factors that make someone more or less likely to be a customer, we can run the inverse of the steps in the pipeline to get back to the original, untransformed data. ###Code def sorted_columns_from_cluster(pl, cluster): ''' Invert the pipeline operations to recover data in original form, and sort by most impactful columns for a given cluster. Args: pl (Pipeline): pipeline to invert cluster (int): cluster number to investigate Returns: cluster_df (DataFrame): DataFrame with column names as index, scaled and unscaled values, sorted by scaled value descending ''' # Get scale, reduce, and cluster steps from pipeline scl = pl.named_steps['scale'] pca = pl.named_steps['reduce'] kmc = pl.named_steps['cluster'] # Invert the PCA transform on the cluster centres. Gives scaled values. inv_cluster = pca.inverse_transform(kmc.cluster_centers_) # Invert the scaling transform on the cluster centers. Gives unscaled values. inv_scale = scl.inverse_transform(inv_cluster) # Put the values for the selected cluster into a dataframe cluster_df = pd.DataFrame(inv_cluster[cluster-1], index=azdias_clean.columns, columns=['Scaled Value']) cluster_df['Value'] = inv_scale[cluster-1] # Sort by the scaled value - largest impact first cluster_df.sort_values(by='Scaled Value', ascending=False, inplace=True) return cluster_df def display_cluster_meaning(pl, cluster_numbers, display_n=5): ''' Displays the top indicators for a given cluster number by inverting the steps of the pipeline to recover the original, untransformed data. Args: pl (Pipline): pipeline with scale, reduce, and cluster steps cluster_number (list): list of cluster numbers to display meaning of display_n (int): number of indicators to display (default 5) ''' for c in cluster_numbers: cluster_df = sorted_columns_from_cluster(pl, c) # Get descriptions and meanings for each factor descriptions = [] meanings = [] for i in range(len(cluster_df)): # Get the description of the factor attr = attributes_df[attributes_df['Attribute'] == cluster_df.iloc[i].name] descriptions.append(attr['Description'].values[0]) # Try to look up the meaning of the value # This is a quick approach that doesn't always succeed val = np.floor(cluster_df.iloc[i]['Value']) try: meaning = list(attr[attr['Value'] == val]['Meaning'])[0] # If there isn't a match (due to encoding etc.), just fill with nan except: meaning = np.nan meanings.append(meaning) cluster_df['Description'] = descriptions cluster_df['Meaning'] = meanings # Display the result using Ipython.display to format nicely display_markdown('Top {} indicators in cluster {}'.format(display_n, c), raw=True) display(cluster_df.head(display_n)) display_markdown('---', raw=True) # Display meanings of the top and bottom two clusters display_markdown('#### Top positive clusters for customers', raw=True) display_cluster_meaning(pca_pipeline, top_two) display_markdown('#### Top negative clusters for customers', raw=True) display_cluster_meaning(pca_pipeline, bottom_two) ###Output _____no_output_____ ###Markdown From this we can describe what characteristics make a person in the general population more or less likely to be a customer.From clusters 8 and 3 we can see that customers are **more** likely to:* Live in an area with a high proportion high-end cars (BMW and Mercedes-Benz)* Live in an area with a high proportion of high-powered cars (max. 
speed > 210 kph, engine power > 121 kW)* Low mobility, owning homes in lower density areas* Have average income* Have interest in finances (inferred from "low financial interest" being "low")From clusters 4 and 2 we can see that customers are **less** likely to:* Be financial investors or money savers* Be "working class"* Live in an area with mainly 6-10 family homes (i.e. high density)**In summary: financially-aware people with average income, but high expenditure (high-end cars, home ownership) are most likely to be customers.** Part 2: Supervised Learning ModelNow that we've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, we will verify our model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, we'll create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. ###Code # Uncomment/comment the appropriate paths for working from the Udacity workspace, or locally # mailout_train_path = '../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv' mailout_train_path = './Data/Udacity_MAILOUT_052018_TRAIN.csv' mailout_train = pd.read_csv(mailout_train_path, sep=';') ###Output /Users/guy/miniforge3/envs/.venv/lib/python3.9/site-packages/IPython/core/interactiveshell.py:3444: DtypeWarning: Columns (18,19) have mixed types.Specify dtype option on import or set low_memory=False. exec(code_obj, self.user_global_ns, self.user_ns) ###Markdown 2.1 Cleaning the dataAs with the previous data files, we need to clean the data before we can use it to generate a model. We can use the same functions that we defined previously to do this.First, let's look at an overview. ###Code dataframe_overview(mailout_train) ###Output 42962 rows and 367 columns. 361 columns of type 'number'. 6 columns of type 'object'. 14% of the data is null. ###Markdown Now let's apply the same cleaning steps to the data. As a reminder, this will:1. Remove invalid values (e.g. X, XX)2. Replace values used for unknown data with NaN3. Drop columns for which we don't know their meaning4. Identify and drop columns and rows with lots of missing values5. Drop or encode non-numeric columns6. Impute missing values ###Code mailout_train_clean, mailout_dropped_cols, mailout_encoders = clean_df(mailout_train, cat_cols_to_encode) ###Output Replacing invalid X values... Found 11 values matching '/X+/' in column 'CAMEO_DEU_2015'. replaced with NaN. Found 11 values matching '/X+/' in column 'CAMEO_DEUG_2015'. replaced with NaN. Found 11 values matching '/X+/' in column 'CAMEO_INTL_2015'. replaced with NaN. Complete. Replacing unknown values with NaN... Complete. Dropping unknown columns... Complete. Finding and dropping columns with over 50% NaN... Complete. Finding and dropping rows with over 20% NaN... Complete. Imputing NaN values with most frequent values... Complete. Encoding non-numeric columns... Encoding column 'CAMEO_DEU_2015'... Complete. Encoding column 'OST_WEST_KZ'... Complete. Complete. 
Cleaned dataframe returned.
###Markdown Let's see what it looks like now.
###Code dataframe_overview(mailout_train_clean) mailout_dropped_cols
###Output 33837 rows and 271 columns. 271 columns of type 'number'. 0 columns of type 'object'. 0% of the data is null.
###Markdown Great! We've removed or imputed all the null data, and have only numeric columns in the dataset. Two columns were dropped due to high fractions of null values: `KBA05_BAUMAX` and `TITEL_KZ`, with 53% and 99% null values, respectively. 2.2 Addressing imbalanceThe `RESPONSE` column contains the values we want to predict. Let's take a look at this column in more detail.
###Code # Print and plot the counts of values in the RESPONSE column response_counts = mailout_train_clean['RESPONSE'].value_counts() print(response_counts) plt.pie(response_counts) plt.legend(response_counts.index);
###Output 0 33421 1 416 Name: RESPONSE, dtype: int64
###Markdown We can see that this is very imbalanced; there are ~80x more zeroes (no response) compared to ones (response). We need to be aware of this when creating our model, otherwise it's likely that the model will simply predict the majority class (no response) for almost every row when using e.g. accuracy as the scoring method.One approach we can take is to score our models using ROC AUC, rather than accuracy. 2.3 Evaluating different modelling approachesNow that we have clean data, we can begin to generate models and evaluate their performance. The variables we're going to explore are:1. Different machine learning models.2. Inclusion of all columns, versus the most impactful columns as found in our unsupervised model in part 1.3. Hyperparameter tuning of the best model/columns combination using GridSearchCV. 2.3.1 Model selection
###Code # Split the data into X (factors) and y (response) dataframes y = mailout_train_clean['RESPONSE'] X = mailout_train_clean.drop(columns=['RESPONSE']) # Split into test and train datasets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
###Output _____no_output_____
###Markdown We're going to evaluate the performance of the model using the ROC AUC score.
###Code def generate_evaluate_model(X_train, X_test, y_train, y_test, model): ''' Fit and score a model against training and test data using ROC AUC Args: X_train (array): X training data X_test (array): X test data y_train (array): y (response) training data y_test (array): y (response) test data model: sklearn classifier or pipeline Returns: roc_auc (float): ROC AUC score fp_rate (list): false positive rate tp_rate (list): true positive rate ''' model.fit(X_train, y_train) y_pred = model.predict_proba(X_test) roc_auc = roc_auc_score(y_test, y_pred[:,1]) fp_rate, tp_rate, threshold = roc_curve(y_test, y_pred[:,1]) return roc_auc, fp_rate, tp_rate
###Output _____no_output_____
###Markdown We're going to test four different classifiers:1. Logistic regression2. Random forest3. Adaptive boosting (AdaBoost)4. Gradient boostingWe'll use each one with its default parameters, and see which gives the highest ROC AUC score to determine which to take forward.
###Code # Dictionary of models to test models = { 'LogisticRegression': LogisticRegression(max_iter=1000, random_state=42), 'RandomForest' : RandomForestClassifier(n_jobs=-1, random_state=42), 'AdaBoost': AdaBoostClassifier(random_state=42), 'GradientBoosting': GradientBoostingClassifier(random_state=42) } # Iterate through models for model_name, model in models.items(): # Create pipeline with scaling step first (required for some models) pl = Pipeline([ ('scale', preprocessing.MinMaxScaler()), ('clf', model) ]) score, fp_rate, tp_rate = generate_evaluate_model(X_train, X_test, y_train, y_test, pl) # Plot ROC curve, with ROC AUC score in the title plt.plot(fp_rate, tp_rate) plt.plot([0, 1], [0, 1], linestyle='--') plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curve: {} (AUC: {:.4f})'.format(model_name, score)) plt.show(); ###Output _____no_output_____ ###Markdown The charts above show that the model based on the LogisticRegression classifier gives the highest ROC AUC score (0.6119). We will use this model, and try to optimise it via choice of columns and using GridSearchCV. 2.3.2 Column selectionFrom part 1, we know that some columns offer greater correlation with being a customer than others. It's possible that we might be able to create a better model by restricting the dataset to these columns with greater correlation. We will take the top two clusters that positively correlate with customers, and the bottom two clusters that negatively correlated with customers (we identified these in section 1.3). From each of these we'll create a list of the *n* most impactful columns from the dataset, and see if our model improves when we use only these columns. ###Code def select_impactful_columns(n, pipeline, most_impactful_clusters): ''' Select the n most impactful columns from the given clusters Args: n (int): number of columns to return pipeline (Pipeline): pipeline used to generate clusters most_impactful_clusters (list): list of cluster numbers to include Returns: most_impactful_cols (list): list of column names ''' # First get the n most impactful columns for each cluster most_impactful_cols_per_cluster = [] for cluster in most_impactful_clusters: # Get columns and values from the cluster df_cluster = sorted_columns_from_cluster(pipeline, cluster) # Add the top n columns to the list of impactful columns most_impactful_cols_per_cluster.append(list(df_cluster.index)[:n]) # Then loop through, adding columns (if not already in the list) # until we have >= n columns most_impactful_cols = [] i = 0 while len(most_impactful_cols) < n: for j in range(len(most_impactful_clusters)): new_col = most_impactful_cols_per_cluster[j][i] if new_col not in most_impactful_cols: most_impactful_cols.append(new_col) i += 1 # Select first n elements most_impactful_cols = most_impactful_cols[:n] return most_impactful_cols ###Output _____no_output_____ ###Markdown Now we'll create models with increasing numbers of impactful columns included, and determine which gives the highest score. ###Code # Use top two (+ve correlating) and bottom two (-ve correlating) clusters from earlier analysis most_impactful_clusters = top_two + bottom_two # Logistic regression pipeline pipeline = Pipeline([ ('scale', preprocessing.MinMaxScaler()), ('clf', LogisticRegression(max_iter=1000, random_state=42)) ]) # Range of numbers of columns to test n_cols_range = range(1, X_train.shape[1]) scores = [] for n_cols in n_cols_range: # Select columns, generate and score model print('Testing with {} columns... 
'.format(n_cols), end='') cols = select_impactful_columns(n_cols, pca_pipeline, most_impactful_clusters) score, _, _ = generate_evaluate_model(X_train[cols], X_test[cols], y_train, y_test, pipeline) print('Complete (ROC AUC: {:.4f}).'.format(score)) scores.append(score) # Pull out the number of columns where the ROC AUC score was maximum n_cols_max_score = n_cols_range[np.where(scores == max(scores))[0][0]] print('Best score ({:.4f}) achieved with {} columns.'.format(max(scores), n_cols_max_score)) # Plot the relationship between number of cols and ROC AUC score, highlighting the max plt.plot(n_cols_range, scores) plt.plot([0, n_cols_max_score, n_cols_max_score], [max(scores), max(scores), min(scores)], '--') plt.title('ROC AUC vs. number of columns in model') plt.xlabel('Number of columns') plt.ylabel('ROC AUC'); most_impactful_columns = select_impactful_columns(n_cols_max_score, pca_pipeline, most_impactful_clusters) ###Output Best score (0.6743) achieved with 48 columns. ###Markdown By using only the 48 most impactful columns in the model we get a ROC AUC score of 0.6743 versus 0.6119 using all columns; a ~10% improvement! 2.3.3 Model optimisationWe'll now try and optimise this model further by tuning the hyperparameters using [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). We can look at the [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) documentation to see what tunable parameters are available. We will test:1. `penalty`: `l1`, `l2`, and `none`2. `solver`: `lbfgs`, `liblinear`, and `sag`3. `C`: 15 values from 0.1 to 1.0, log spaced**Note:** GridSearchCV will test every permutation of these parameters, but not all permutations are valid (e.g. `l1` penalty with the `sag` solver). This will raise a warning when tested, and the score will be set to np.nan. ###Code # Define the pipeline to the optimised estimator_pipeline = Pipeline([ ('scale', preprocessing.MinMaxScaler()), ('clf', LogisticRegression(max_iter=1000)) ]) # Hyperparameters to test parameters = { 'clf__penalty': ['l1', 'l2', 'none'], 'clf__solver': ['lbfgs', 'liblinear', 'sag'], 'clf__C': np.logspace(-1, 0, 15) } # Cross validation strategy cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=42) # GridSearch model, using ROC AUC for scoring gridsearch_model = GridSearchCV(estimator=estimator_pipeline, param_grid=parameters, verbose=2, scoring='roc_auc', cv=cv) ###Output _____no_output_____ ###Markdown We can now run GridSearchCV, using the 48 most impactful columns we identified earlier. As noted above, we will expect a number of warnings due to some permutations of the hyperparameters being invalid. ###Code gcv_score, fp_rate, tp_rate = generate_evaluate_model(X_train[most_impactful_columns], X_test[most_impactful_columns], y_train, y_test, gridsearch_model) # Plot the ROC curve for the optimised model plt.plot(fp_rate, tp_rate) plt.plot([0, 1], [0, 1], linestyle='--') plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curves: optimised model') plt.show(); # Print the ROC AUC score and the best hyperparameters print('ROC AUC score for optimised model: {:.4f}'.format(gcv_score)) print('Best parameters: ', end='') print(gridsearch_model.best_params_) ###Output _____no_output_____ ###Markdown Using GridsearchCV has resulted in a small improvement in the ROC AUC score: from 0.6743 to 0.6753. 
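If we want to see how the other hyperparameter combinations fared, the fitted grid search object exposes this through its `cv_results_` attribute; a quick, optional sketch:
###Code
# Sketch: summarise the cross-validated scores for each parameter combination tested
import pandas as pd

cv_results = pd.DataFrame(gridsearch_model.cv_results_)
summary_cols = ['param_clf__penalty', 'param_clf__solver', 'param_clf__C',
                'mean_test_score', 'std_test_score']
print(cv_results[summary_cols].sort_values('mean_test_score', ascending=False).head())
###Output _____no_output_____
###Markdown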
Part 3: Kaggle CompetitionWe'll now apply our optimised model to the TEST dataset and make predictions to submit to the [Kaggle competition](https://www.kaggle.com/c/udacity-arvato-identify-customers/). ###Code # Uncomment/comment the appropriate paths for working from the Udacity workspace, or locally # mailout_train_path = '../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv' mailout_test_path = './Data/Udacity_MAILOUT_052018_TEST.csv' mailout_test = pd.read_csv(mailout_test_path, sep=';') dataframe_overview(mailout_test) ###Output 42833 rows and 366 columns. 360 columns of type 'number'. 6 columns of type 'object'. 14% of the data is null. ###Markdown We can use our cleaning function again to clean the dataset. This time, however, we **don't** want to drop any rows (we want to make a prediction for every row), so we set the missing data threshold to 1 (100%) to ensure no rows are dropped. The missing data will be imputed instead. ###Code mailout_test_clean, _, _ = clean_df(mailout_test, cat_cols_to_encode, col_nan_threshold=0.5, row_nan_threshold=1) ###Output Replacing invalid X values... Found 7 values matching '/X+/' in column 'CAMEO_DEU_2015'. replaced with NaN. Found 7 values matching '/X+/' in column 'CAMEO_DEUG_2015'. replaced with NaN. Found 7 values matching '/X+/' in column 'CAMEO_INTL_2015'. replaced with NaN. Complete. Replacing unknown values with NaN... Complete. Dropping unknown columns... Complete. Finding and dropping columns with over 50% NaN... Complete. Finding and dropping rows with over 100% NaN... Complete. Imputing NaN values with most frequent values... Complete. Encoding non-numeric columns... Encoding column 'CAMEO_DEU_2015'... Complete. Encoding column 'OST_WEST_KZ'... Complete. Complete. Cleaned dataframe returned. ###Markdown The model we have selected uses a subset of the columns. Let's reduce the cleaned dataset down to these columns, and then look at its properties again. ###Code mailout_test_subset = mailout_test_clean[most_impactful_columns] dataframe_overview(mailout_test_subset) ###Output 42833 rows and 48 columns. 48 columns of type 'number'. 0 columns of type 'object'. 0% of the data is null. ###Markdown The dataset is now clean: fully numeric, no missing data, and reduced down to the columns used in our model. We can now use the optimised model generated using GridsearchCV to make predictions based on this data. ###Code mailout_test_preds = gridsearch_model.predict_proba(mailout_test_subset) # Slice down to the probabilities of row being in class 1 (i.e. a customer) mailout_test_class_1 = mailout_test_preds[:, 1] # Construct a dataframe of the predictions with the LNR as the index kaggle_df = pd.DataFrame(mailout_test_class_1, index=mailout_test['LNR'], columns=['RESPONSE']) kaggle_df.head() # Export as a CSV file to submit to the Kaggle competition kaggle_df.to_csv('./kaggle_competition.csv') ###Output _____no_output_____ ###Markdown Capstone Project: Create a Customer Segmentation Report for Arvato Financial ServicesIn this project, you will analyze demographics data for customers of a mail-order sales company in Germany, comparing it against demographics information for the general population. You'll use unsupervised learning techniques to perform customer segmentation, identifying the parts of the population that best describe the core customer base of the company. 
Then, you'll apply what you've learned on a third dataset with demographics information for targets of a marketing campaign for the company, and use a model to predict which individuals are most likely to convert into becoming customers for the company. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.If you completed the first term of this program, you will be familiar with the first part of this project, from the unsupervised learning project. The versions of those two datasets used in this project will include many more features and has not been pre-cleaned. You are also free to choose whatever approach you'd like to analyzing the data rather than follow pre-determined steps. In your work on this project, make sure that you carefully document your steps and decisions, since your main deliverable for this project will be a blog post reporting your findings. ###Code # import libraries here; add more as necessary import numpy as np import pandas as pd import statistics import matplotlib.pyplot as plt import matplotlib.cm as cm import seaborn as sns import skopt from skopt import BayesSearchCV from sklearn.cluster import KMeans from sklearn.svm import SVC from sklearn.preprocessing import MinMaxScaler, LabelEncoder from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score, learning_curve from sklearn.linear_model import LogisticRegression from sklearn.neural_network import MLPClassifier from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, GradientBoostingClassifier from sklearn.metrics import roc_auc_score, recall_score import lightgbm as lgb import xgboost as xgb # magic word for producing visualizations in notebook %matplotlib inline ###Output _____no_output_____ ###Markdown Part 0: Get to Know the DataThere are four data files associated with this project:- `Udacity_AZDIAS_052018.csv`: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns).- `Udacity_CUSTOMERS_052018.csv`: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).- `Udacity_MAILOUT_052018_TRAIN.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).- `Udacity_MAILOUT_052018_TEST.csv`: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. Use the information from the first two files to figure out how customers ("CUSTOMERS") are similar to or differ from the general population at large ("AZDIAS"), then use your analysis to make predictions on the other two files ("MAILOUT"), predicting which recipients are most likely to become a customer for the mail-order company.The "CUSTOMERS" file contains three extra columns ('CUSTOMER_GROUP', 'ONLINE_PURCHASE', and 'PRODUCT_GROUP'), which provide broad information about the customers depicted in the file. The original "MAILOUT" file included one additional column, "RESPONSE", which indicated whether or not each recipient became a customer of the company. 
For the "TRAIN" subset, this column has been retained, but in the "TEST" subset it has been removed; it is against that withheld column that your final predictions will be assessed in the Kaggle competition.Otherwise, all of the remaining columns are the same between the three data files. For more information about the columns depicted in the files, you can refer to two Excel spreadsheets provided in the workspace. [One of them](./DIAS Information Levels - Attributes 2017.xlsx) is a top-level list of attributes and descriptions, organized by informational category. [The other](./DIAS Attributes - Values 2017.xlsx) is a detailed mapping of data values for each feature in alphabetical order.In the below cell, we've provided some initial code to load in the first two datasets. Note for all of the `.csv` data files in this project that they're semicolon (`;`) delimited, so an additional argument in the [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call has been included to read in the data properly. Also, considering the size of the datasets, it may take some time for them to load completely.You'll notice when the data is loaded in that a warning message will immediately pop up. Before you really start digging into the modeling and analysis, you're going to need to perform some cleaning. Take some time to browse the structure of the data and look over the informational spreadsheets to understand the data values. Make some decisions on which features to keep, which features to drop, and if any revisions need to be made on data formats. It'll be a good idea to create a function with pre-processing steps, since you'll need to clean all of the datasets before you work with them. ###Code # load in the data azdias = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_AZDIAS_052018.csv', sep=';') customers = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_CUSTOMERS_052018.csv', sep=';') print(azdias.iloc[:,19:21].columns) ###Output Index(['CAMEO_DEUG_2015', 'CAMEO_INTL_2015'], dtype='object') ###Markdown So we see that the problem comes from 'CAMEO_DEUG_2015' and 'CAMEO_INTL_2015' columns, we will investigate a little more. 
###Code azdias.CAMEO_DEUG_2015.unique() azdias.CAMEO_INTL_2015.unique() ###Output _____no_output_____ ###Markdown We can see that there are ints , floats and strings We could convert everthing to float after we replace the 'X' and 'XX' with np.nan ###Code def cameo_fix(df): ''' Fix the X and XX in cameo columns by replacing it with nan Args df (df): demographic dataframe returns (df): dataframe with X or XX replaced with nan ''' cols = ['CAMEO_DEUG_2015', 'CAMEO_INTL_2015'] df[cols] = df[cols].replace({'XX': np.nan, 'X':np.nan}) df[cols] = df[cols].astype(float) return df #Applying the fix to azdias and customers dataframes azdias = cameo_fix(azdias) customers = cameo_fix(customers) azdias.CAMEO_DEUG_2015.unique() azdias.CAMEO_INTL_2015.unique() ###Output _____no_output_____ ###Markdown Difference between datasetsThe azdias dataset don't have 'PRODUCT_GROUP', 'CUSTOMER_GROUP' and 'ONLINE_PURCHASE' ###Code #Dropping 'PRODUCT_GROUP', 'CUSTOMER_GROUP' and 'ONLINE_PURCHASE' from customers dataframe customers = customers.drop(['CUSTOMER_GROUP', 'ONLINE_PURCHASE', 'PRODUCT_GROUP'], inplace=False, axis=1) list(set(azdias.columns) - set(customers.columns)) list(set(customers.columns) - set(azdias.columns)) # creating a function to determine percentage of missing values def missing_pct(df): ''' Calculates the percentage of missing values per columns in a dataframe Args df (df): dataframe return missing_df (df): ''' missing = df.isnull().sum()* 100/len(df) missing_df = pd.DataFrame({'column_name': df.columns, 'percent_missing': missing}) return missing_df azdias_missing = missing_pct(azdias) azdias_missing azdias.select_dtypes(include='object') #From reading the DIAS Attributes - Values 2017.xlsx creating a list of features that are categorical categorical = ['AGER_TYP', 'ANREDE_KZ', 'CAMEO_DEU_2015', 'CAMEO_DEUG_2015', 'CAMEO_INTL_2015', 'CJT_GESAMTTYP', 'D19_BANKEN_DATUM', 'D19_BANKEN_OFFLINE_DATUM', 'D19_BANKEN_ONLINE_DATUM', 'D19_GESAMT_DATUM', 'D19_GESAMT_OFFLINE_DATUM', 'D19_GESAMT_ONLINE_DATUM', 'D19_KONSUMTYP', 'D19_TELKO_DATUM', 'D19_TELKO_OFFLINE_DATUM', 'D19_TELKO_ONLINE_DATUM', 'D19_VERSAND_DATUM', 'D19_VERSAND_OFFLINE_DATUM', 'D19_VERSAND_ONLINE_DATUM', 'D19_VERSI_DATUM', 'D19_VERSI_OFFLINE_DATUM', 'D19_VERSI_ONLINE_DATUM', 'FINANZTYP', 'GEBAEUDETYP', 'GFK_URLAUBERTYP', 'GREEN_AVANTGARDE', 'KBA05_BAUMAX', 'KK_KUNDENTYP', 'LP_FAMILIE_FEIN', 'LP_FAMILIE_GROB', 'LP_STATUS_FEIN', 'LP_STATUS_GROB', 'NATIONALITAET_KZ', 'OST_WEST_KZ', 'PLZ8_BAUMAX', 'SHOPPER_TYP', 'SOHO_KZ', 'TITEL_KZ', 'VERS_TYP', 'WOHNLAGE', 'ZABEOTYP'] def cat_count(df, categorical): ''' Given a demographic dataframe and a list of categorical features, prints the amount of categorical variables per feature Args: df (df): demographics dataframe categorical (list): a list of categorical features returns: None ''' cat = [x for x in categorical if x in df.columns] print(df[cat].nunique()) cat_count(azdias, categorical) #load in the dias attributes data dias_attributes = pd.read_excel('DIAS Attributes - Values 2017.xlsx', skiprows=[0]) dias_attributes.drop(['Unnamed: 0'], axis = 1, inplace = True) dias_attributes.head(15) #Find unknown values for each attributes attributes_unknown = {} for i in range(len(dias_attributes)): if type(dias_attributes.iloc[i]['Attribute']) == str: tmp = dias_attributes.iloc[i]['Attribute'] if type(dias_attributes.iloc[i]['Meaning']) == str: if not dias_attributes.iloc[i]['Meaning'].find('unknown') == -1\ or not dias_attributes.iloc[i]['Meaning'].find('uniformly distributed') == -1 or\ not 
dias_attributes.iloc[i]['Meaning'].find('missing') == -1: if tmp in attributes_unknown: attributes_unknown[tmp].append(str(dias_attributes.iloc[i]['Value'])) else: attributes_unknown[tmp] = [str(dias_attributes.iloc[i]['Value'])] name = [] attr = [] for i in attributes_unknown: name.append(i) attr.append(attributes_unknown[i]) tmp = [] for j in attributes_unknown[i]: if j.find(','): tmp += j.replace(' ','').split(',') else: tmp.append(j) for k in range(len(tmp)): tmp[k] = int(tmp[k]) attributes_unknown[i] = tmp #for clean printing purpose for idx, i in enumerate(attr): attr[idx] = ','.join(attr[idx]).replace(' ','') a = {'features':name, 'unknowns':attr} l = ['features','unknowns'] attr_df = pd.DataFrame(a, columns=l) name = [] attr = [] attr_df.head(30) def missing_to_nans(df, attributes_unknown): ''' Replace the missing value in a demographic dataframe with nan Args: df (df): demographic dataframe attributes_unknown (dict): a dictionary where the keys are the features attributes and containing a list of the unknown value for this specific attribute returns: None ''' for feature in attributes_unknown: if feature in df: for missing in attributes_unknown[feature]: df[feature].replace(missing, np.nan, inplace=True) missing_to_nans(azdias, attributes_unknown) missing_to_nans(customers, attributes_unknown) azdias_missing = missing_pct(azdias) customers_missing = missing_pct(customers) def feature_cap(missing, cap): ''' Compute the number of features that have less missing values than the cap Args: missing (df): missing value dataframe cap (int): an interger representing in % the maximum of missing value that a feature can have returns (list): a list of feature that have less missing value than the cap ''' res = [] for i in range(len(missing)): if missing.iloc[i]['percent_missing'] <= cap: res.append(missing.iloc[i]['column_name']) return res azdias_x=[] azdias_y=[] for cap in range(101): azdias_x.append(cap) azdias_y.append(len(feature_cap(azdias_missing, cap))) plt.plot(azdias_x, azdias_y) plt.xlabel('% of missing value') plt.xticks(np.arange(0, 110, step=10)) plt.ylabel('Number of features') plt.title('Azdias') plt.grid(b=True) plt.show() customers_x=[] customers_y=[] for cap in range(101): customers_x.append(cap) customers_y.append(len(feature_cap(customers_missing, cap))) plt.plot(customers_x, customers_y) plt.xlabel('% of missing value') plt.xticks(np.arange(0, 110, step=10)) plt.ylabel('Number of features') plt.title('Customers') plt.grid(b=True) plt.show() ###Output _____no_output_____ ###Markdown From theese plot we can see that we could take a cap of around 18% for azdias dataset, but most of customers datasets columns have more than that. 30% seems to be a good choice overall ###Code azdias_features_selected = feature_cap(azdias_missing, 30) print(len(azdias_features_selected)) customers_features_selected = feature_cap(customers_missing, 30) print(len(customers_features_selected)) ###Output 355 ###Markdown We can see that with the same cap we dont have the same amount of features selected between azdias and customers dataframe. Therefore we will need to check for the features that are selected in both. 
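As a quick sketch (the next cells reach the same conclusion by inspecting the set differences), the features usable for both dataframes are simply the intersection of the two selections:
###Code
# Sketch: features that survive the missing-value cap in both azdias and customers
common_features = sorted(set(azdias_features_selected) & set(customers_features_selected))
print('{} features are selected in both dataframes.'.format(len(common_features)))
###Output _____no_output_____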
###Code list(set(azdias_features_selected) - set(customers_features_selected)) list(set(customers_features_selected) - set(azdias_features_selected)) #since there is only 2 features selected more in azdias than customers and none the other way around, #we just keep customers_features_selected as features_selected for both in the future features_selected = customers_features_selected #getting rid of the features we dont need azdias = azdias[features_selected] ###Output _____no_output_____ ###Markdown We now need to do some feature engineering around the categorical data ###Code def features_engineering(df): ''' This function takes a demographic dataframe to create new features and encode select categorical features Args: df (df) : demographic dataframe returns: df (df) : dataframe with new features ''' # Dealing with Unnamed 0 if 'Unnamed: 0' in df: df.drop(['Unnamed: 0'], axis = 1, inplace = True) # Dealing with ANREDE_KZ if 'ANREDE_KZ' in df: df = pd.get_dummies(df, columns = ['ANREDE_KZ'], prefix = ['ANREDE_KZ'], dummy_na = True, drop_first = True) # Dealing with CAMEO_DEU_2015 if 'CAMEO_DEU_2015' in df: most_frequent = df['CAMEO_DEU_2015'].value_counts().idxmax() df['CAMEO_DEU_2015'] = df['CAMEO_DEU_2015'].replace(['XX'], most_frequent).fillna(most_frequent) values = np.array(df['CAMEO_DEU_2015']) encoder = LabelEncoder() encoded = encoder.fit_transform(values) df['CAMEO_DEU_2015'] = encoded #dealing with CAMEO_INTL_2015 if 'CAMEO_INTL_2015' in df: most_frequent = df['CAMEO_INTL_2015'].value_counts().idxmax() df['CAMEO_INTL_2015'] = df['CAMEO_INTL_2015'].fillna(most_frequent) df['FAMILY_STATUS'] = df['CAMEO_INTL_2015'].apply(lambda x: float(str(x)[1])) df['FAMILY_REVENUE'] = df['CAMEO_INTL_2015'].apply(lambda x: float(str(x)[0])) df.drop(['CAMEO_INTL_2015'], axis = 1, inplace = True) # Dealing with EINGEFUEGT_AM if 'EINGEFUEGT_AM' in df: df['EINGEFUEGT_AM'] = pd.to_datetime(df['EINGEFUEGT_AM']).dt.year #dealing with D19_LETZTER_KAUF_BRANCHE if 'D19_LETZTER_KAUF_BRANCHE' in df: df.drop(['D19_LETZTER_KAUF_BRANCHE'], axis = 1, inplace = True) #dealing with LP_LEBENSPHASE_FEIN if 'LP_LEBENSPHASE_FEIN' in df: replace_dict = {1: 1, 2: 1, 3: 2, 4: 2, 5: 1, 6: 1, 7: 2, 8: 2, 9: 2, 10: 3, 11: 2, 12: 2, 13: 4, 14: 2, 15: 1, 16: 2, 17: 2, 18: 3, 19: 3, 20: 4, 21: 1, 22: 2, 23: 3, 24: 1, 25: 2, 26: 2, 27: 2, 28: 4, 29: 1, 30: 2, 31: 1, 32: 2, 33: 2, 34: 2, 35: 4, 36: 2, 37: 2, 38: 2, 39: 4, 40: 4} df['LP_LEBENSPHASE_FEIN_WEALTH'] = df['LP_LEBENSPHASE_FEIN'].map(replace_dict) replace_dict = {1: 1, 2: 2, 3: 1, 4: 2, 5: 3, 6: 4, 7: 3, 8: 4, 9: 2, 10: 2, 11: 3, 12: 4, 13: 3, 14: 1, 15: 3, 16: 3, 17: 2, 18: 1, 19: 3, 20: 3, 21: 2, 22: 2, 23: 2, 24: 2, 25: 2, 26: 2, 27: 2, 28: 2, 29: 1, 30: 1, 31: 3, 32: 3, 33: 1, 34: 1, 35: 1, 36: 3, 37: 3, 38: 4, 39: 2, 40: 4} df['LP_LEBENSPHASE_FEIN_AGE'] = df['LP_LEBENSPHASE_FEIN'].map(replace_dict) df.drop(['LP_LEBENSPHASE_FEIN'], axis = 1, inplace = True) # Dealing with OST_WEST_KZ if 'OST_WEST_KZ' in df: replace_dict = {'W':0, 'O':1} df['OST_WEST_KZ'] = df['OST_WEST_KZ'].map(replace_dict) # Dealing with PRAEGENDE_JUGENDJAHRE if 'PRAEGENDE_JUGENDJAHRE' in df: replace_dict = {2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 3, 8: 4, 9: 4, 10: 5, 11: 5, 12: 5, 13: 5, 14: 6, 15: 6} df['PRAEGENDE_JUGENDJAHRE_NEW'] = df['PRAEGENDE_JUGENDJAHRE'].map(replace_dict) df.drop(['PRAEGENDE_JUGENDJAHRE'], axis = 1, inplace = True) #Dealing with WOHNLAGE if 'WOHNLAGE' in df: replace_dict = {0.0:3, 1.0:1, 2.0:2, 3.0:3, 4.0:4, 5.0:5, 7.0:3, 8.0:3} df['WOHNLAGE_QUALITAT'] = 
df['WOHNLAGE'].map(replace_dict) replace_dict = {1.0:0, 2.0:0, 3.0:0, 4.0:0, 5.0:0, 7.0:1, 8.0:1} df['WOHNLAGE_RURAL'] = df['WOHNLAGE'].map(replace_dict) df.drop(['WOHNLAGE'], axis = 1, inplace = True) print('Replacing the NaNs value in the dataframe with the most frequent one') for feat in df: most_frequent = df[feat].value_counts().idxmax() df[feat] = df[feat].fillna(most_frequent) return df azdias = features_engineering(azdias) #quick check if everything has gone as intended missing_pct(azdias)['percent_missing'].sum() customers = features_engineering(customers) missing_pct(customers)['percent_missing'].sum() #check if there is any categorical feature that would need engineering selected_categorical = [] for cat in categorical: if cat in azdias: selected_categorical.append(cat) for i in selected_categorical: print(i, azdias[i].unique()) ###Output CAMEO_DEU_2015 [25 35 15 5 37 13 8 0 4 42 20 36 30 21 43 40 1 12 17 14 11 18 31 39 27 28 7 32 41 33 22 3 38 26 24 19 16 9 6 34 10 29 23 2] CAMEO_DEUG_2015 [8. 4. 2. 6. 1. 9. 5. 7. 3.] CJT_GESAMTTYP [2. 5. 3. 4. 1. 6.] D19_BANKEN_DATUM [10 5 8 6 9 1 7 4 2 3] D19_BANKEN_OFFLINE_DATUM [10 9 8 2 5 4 1 6 7 3] D19_BANKEN_ONLINE_DATUM [10 5 8 6 9 1 4 7 2 3] D19_GESAMT_DATUM [10 1 3 5 9 4 7 6 8 2] D19_GESAMT_OFFLINE_DATUM [10 6 8 9 5 2 4 1 7 3] D19_GESAMT_ONLINE_DATUM [10 1 3 5 9 4 7 6 8 2] D19_KONSUMTYP [9. 1. 4. 3. 6. 5. 2.] D19_TELKO_DATUM [10 6 9 8 7 5 4 2 1 3] D19_TELKO_OFFLINE_DATUM [10 8 9 5 6 7 4 2 3 1] D19_TELKO_ONLINE_DATUM [10 9 7 8 6 5 4 1 2 3] D19_VERSAND_DATUM [10 1 5 9 4 8 7 6 3 2] D19_VERSAND_OFFLINE_DATUM [10 9 6 8 5 2 1 4 7 3] D19_VERSAND_ONLINE_DATUM [10 1 5 9 4 8 7 6 3 2] D19_VERSI_DATUM [10 2 8 9 6 7 5 1 4 3] D19_VERSI_OFFLINE_DATUM [10 7 9 6 4 8 5 2 3 1] D19_VERSI_ONLINE_DATUM [10 8 9 5 6 7 4 1 2 3] FINANZTYP [4 1 6 5 2 3] GEBAEUDETYP [1. 8. 3. 2. 6. 4. 5.] GFK_URLAUBERTYP [10. 1. 5. 12. 9. 3. 8. 11. 4. 2. 7. 6.] GREEN_AVANTGARDE [0 1] LP_FAMILIE_FEIN [ 2. 5. 1. 0. 10. 7. 11. 3. 8. 4. 6. 9.] LP_FAMILIE_GROB [2. 3. 1. 0. 5. 4.] LP_STATUS_FEIN [ 1. 2. 3. 9. 4. 10. 5. 8. 6. 7.] LP_STATUS_GROB [1. 2. 4. 5. 3.] NATIONALITAET_KZ [1. 3. 2.] OST_WEST_KZ [0. 1.] PLZ8_BAUMAX [1. 2. 4. 5. 3.] SHOPPER_TYP [1. 3. 2. 0.] SOHO_KZ [0. 1.] VERS_TYP [2. 1.] ZABEOTYP [3 5 4 1 6 2] ###Markdown We now have a clean dataset to work with, but the range of value can be significantly different from on column to an other, so we will need to perform some feature scaling first ###Code def scaler_tool(df): ''' This function takes a dataframe of numbers and transform it through MinMaxScaler. Args: df (df) : a dataframe returns: res_df (df) : dataframe with scaled values ''' features_list = df.columns scaler = MinMaxScaler() scaler.fit(df) res_df = pd.DataFrame(scaler.transform(df)) res_df.columns = features_list return res_df azdias = scaler_tool(azdias) customers = scaler_tool(customers) ###Output _____no_output_____ ###Markdown We will now check our options with dimensionality reduction ###Code def pca_model(df, n_components): ''' This function defines a model that takes in a previously scaled dataframe and returns the result of the transformation. 
The output is an object created post data fitting Args: df (df) : a dataframe n_components (int) : number of components of the dataframe returns: model_pca (object) : a pca object fit with the df ''' pca = PCA(n_components) model_pca = pca.fit(df) return model_pca #explained_variance for PCA def explained_variance_plots(scaler, title): ''' Function that plots the explained variance sum for each number of component of the PCA Args: scaler (object) : a scaler object title (str) : name of the dataset we will show in the plot's title returns: None ''' plt.plot(np.cumsum(scaler.explained_variance_ratio_)) plt.title(title) plt.xlabel('Number of Components') plt.ylabel('Explained Variance Ratio') plt.grid(b=True) plot = plt.show() n_components_azdias = len(azdias.columns) azdias_pca = pca_model(azdias, n_components_azdias) type(azdias_pca) explained_variance_plots(azdias_pca, 'azdias') ###Output _____no_output_____ ###Markdown We will now choose 150 components for the features and then perform a Gap Statistic analysis on KMeans clustering to select the number of cluster we will use. Part 1: Customer Segmentation ReportThe main bulk of your analysis will come in this part of the project. Here, you should use unsupervised learning techniques to describe the relationship between the demographics of the company's existing customers and the general population of Germany. By the end of this part, you should be able to describe parts of the general population that are more likely to be part of the mail-order company's main customer base, and which parts of the general population are less so. ###Code pca = PCA(150) azdias_pca = pca.fit_transform(azdias) customers_pca = pca.fit_transform(customers) def optimalK(data, nrefs=3, maxClusters=10): """ Calculates KMeans optimal K using Gap Statistic from Tibshirani, Walther, Hastie Params: data: ndarry of shape (n_samples, n_features) nrefs: number of sample reference datasets to create maxClusters: Maximum number of clusters to test for Returns: (gaps, optimalK) """ gaps = np.zeros((len(range(1, maxClusters)),)) resultsdf = pd.DataFrame({'clusterCount':[], 'gap':[]}) for gap_index, k in enumerate(range(1, maxClusters)): print('k :',k) # Holder for reference dispersion results refDisps = np.zeros(nrefs) # For n references, generate random sample and perform kmeans getting resulting dispersion of each loop for i in range(nrefs): # Create new random reference set randomReference = np.random.random_sample(size=data.shape) # Fit to it km = KMeans(k) km.fit(randomReference) refDisp = km.inertia_ refDisps[i] = refDisp # Fit cluster to original data and create dispersion km = KMeans(k) km.fit(data) origDisp = km.inertia_ # Calculate gap statistic gap = np.log(np.mean(refDisps)) - np.log(origDisp) # Assign this loop's gap statistic to gaps gaps[gap_index] = gap resultsdf = resultsdf.append({'clusterCount':k, 'gap':gap}, ignore_index=True) return (gaps.argmax() + 1, resultsdf) # Plus 1 because index of 0 means 1 cluster is optimal, index 2 = 3 clusters are optimal def bestK(df): """ Compute the best k with the 1-standard-error method Params: df: a DataFrame with gap value for each clusterCount Returns: (int) best number of clusters """ gap_list = list(df['gap']) gap_std = statistics.stdev(gap_list)/10 for i in range(1,len(gap_list)): if gap_list[i] - gap_list[i-1] < gap_std: return(i-1) k, gapdf = optimalK(azdias_pca, nrefs=5, maxClusters=15) print('Optimal k is: ', k) gapdf #Finding the best K with 1 standard error method k = bestK(gapdf) print('Optimal k is:', k) 
plt.plot(gapdf.clusterCount, gapdf.gap, linewidth=3) plt.scatter(gapdf[gapdf.clusterCount == k].clusterCount, gapdf[gapdf.clusterCount == k].gap, s=250, c='r') plt.grid(True) plt.xticks(np.arange(0, 15, 1)) plt.xlabel('Cluster Count') plt.ylabel('Gap Value') plt.title('Gap Values by Cluster Count') plt.show() ###Output _____no_output_____ ###Markdown So the optimal number of cluster is 9 ###Code kmeans = KMeans(9) model = kmeans.fit(azdias_pca) cluster = pd.DataFrame() cluster['LNR'] = azdias.index.values cluster['cluster'] = model.labels_ from collections import Counter azdias_labels = kmeans.labels_ customers_labels = kmeans.labels_ model_feat = list(azdias.columns) cust_feat = list(customers.columns) model_feat_df = pd.DataFrame() model_feat_df['model_feat'] = model_feat model_feat_notin_cust = [feat for feat in model_feat if feat not in cust_feat] len(model_feat_notin_cust) customers_labels = kmeans.predict(customers_pca) counts_customer = Counter(customers_labels) n_customers = customers_pca.shape[0] customer_freqs = {label: 100*(freq / n_customers) for label, freq in counts_customer.items()} counts_population = Counter(azdias_labels) n_population = azdias_pca.shape[0] population_freqs = {label: 100*(freq / n_population) for label, freq in counts_population.items()} customer_clusters = pd.DataFrame.from_dict(customer_freqs, orient='index', columns=['% of data']) customer_clusters['Cluster'] = customer_clusters.index customer_clusters['DataSet'] = 'Customers Data' population_clusters = pd.DataFrame.from_dict(population_freqs, orient='index', columns=['% of data']) population_clusters['Cluster'] = population_clusters.index population_clusters['DataSet'] = 'General Population' all_clusters = pd.concat([customer_clusters, population_clusters]) sns.catplot(x='Cluster', y='% of data', hue='DataSet', data=all_clusters, kind='bar') plt.show() ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. 
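Because positive responses are rare (as the first cell below shows), it pays to keep class balance in mind from the start. The learning-curve cells further down use stratified cross-validation; a simpler stratified hold-out split would look like the following sketch, which assumes the preprocessed matrices built later in this section.
###Code
from sklearn.model_selection import train_test_split

# Sketch: a stratified 80/20 split of the labelled TRAIN partition, so the rare
# positive class (RESPONSE == 1) is represented on both sides.
# mailout_train_X_scaled and mailout_train_y are built in the cells further down.
X_tr, X_val, y_tr, y_val = train_test_split(
    mailout_train_X_scaled, mailout_train_y,
    test_size=0.2, stratify=mailout_train_y, random_state=42)
###Output
_____no_output_____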
###Code # load in the data mailout_data = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';') mailout_test = pd.read_csv('../../data/Term2/capstone/arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';') #How much in % is there of response print(len(list(mailout_data.loc[mailout_data['RESPONSE'] == 1].index))/len(mailout_data)*100,"%") ###Output 1.2383036171500394 % ###Markdown It appears that the class are very imbalanced as there is only around 1.24% of response, so the accuracy won't be a good metric to evaluate the model performances ###Code #Fixing Cameo columns mailout_data = cameo_fix(mailout_data) mailout_test = cameo_fix(mailout_test) #Replacing missing/unknowns values with nan missing_to_nans(mailout_data, attributes_unknown) missing_to_nans(mailout_test, attributes_unknown) mailout_data_missing = missing_pct(mailout_data) mailout_test_missing = missing_pct(mailout_test) mailout_x=[] mailout_y=[] for cap in range(101): mailout_x.append(cap) mailout_y.append(len(feature_cap(mailout_data_missing, cap))) mailout_test_x=[] mailout_test_y=[] for cap in range(101): mailout_test_x.append(cap) mailout_test_y.append(len(feature_cap(mailout_test_missing, cap))) plt.plot(mailout_x, mailout_y) plt.xlabel('% of missing value') plt.xticks(np.arange(0, 110, step=10)) plt.ylabel('Number of features') plt.title('Mailout') plt.grid(b=True) plt.show() plt.plot(mailout_test_x, mailout_test_y) plt.xlabel('% of missing value') plt.xticks(np.arange(0, 110, step=10)) plt.ylabel('Number of features') plt.title('Mailout test') plt.grid(b=True) plt.show() #Computing the list of columns in mailout_data that have less than 30% of missing values mailout_data_missing = missing_pct(mailout_data) mailout_data_features_selected = feature_cap(mailout_data_missing, 30) #Computing the list of columns in mailout_test that have less than 30% of missing values mailout_test_missing = missing_pct(mailout_test) mailout_test_features_selected = feature_cap(mailout_test_missing, 30) print(len(mailout_data_features_selected)) print(len(mailout_test_features_selected)) list(set(mailout_data_features_selected) - set(mailout_test_features_selected)) list(set(mailout_test_features_selected) - set(mailout_data_features_selected)) mailout_data = mailout_data[mailout_data_features_selected] mailout_test = mailout_test[mailout_test_features_selected] ###Output _____no_output_____ ###Markdown So we have the same columns exept for the RESPONSE that is not in the test dataset ###Code mailout_data = features_engineering(mailout_data) mailout_test = features_engineering(mailout_test) mailout_train_X = mailout_data.drop(['RESPONSE'], inplace=False, axis=1) mailout_train_y = mailout_data['RESPONSE'] mailout_train_X = mailout_train_X.drop(['LNR'], inplace=False, axis=1) mailout_test_X = mailout_test.drop(['LNR'], inplace=False, axis=1) mailout_train_X.shape mailout_test_X.shape scaler = MinMaxScaler() scaler.fit(mailout_train_X.astype(float)) mailout_train_X_scaled = scaler.transform(mailout_train_X) mailout_test_X_scaled = scaler.transform(mailout_test_X) seed = 42 models = [('MLP', MLPClassifier(random_state=seed)), ('LR', LogisticRegression(solver='liblinear', random_state=seed)), ('RF', RandomForestClassifier(n_estimators=250, random_state=seed)), ('LGBM', lgb.LGBMClassifier(random_state=seed)), ('GB', GradientBoostingClassifier(random_state=seed)), ('XGB', xgb.XGBClassifier(random_state=seed))] def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, 
train_sizes=np.linspace(.1, 1.0, 10)): """ Generate a simple plot of the test and traning learning curve. Parameters ---------- estimator : object type that implements the "fit" and "predict" methods An object of that type which is cloned for each validation. title : string Title for the chart. X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples) or (n_samples, n_features), optional Target relative to X for classification or regression; None for unsupervised learning. ylim : tuple, shape (ymin, ymax), optional Defines minimum and maximum yvalues plotted. cv : integer, cross-validation generator, optional If an integer is passed, it is the number of folds (defaults to 3). Specific cross-validation objects can be passed, see sklearn.cross_validation module for the list of possible objects n_jobs : integer, optional Number of jobs to run in parallel (default 1). return : float, the test score mean """ plt.figure() plt.title("Learning curve ({})".format(title)) if ylim is not None: plt.ylim(*ylim) plt.xlabel("Training examples") plt.ylabel("Score") train_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes, scoring = 'roc_auc') train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score") plt.legend(loc="best") plt.yticks(np.arange(0.20, 1.1, 0.1)) plt.show() return test_scores_mean[-1] %%time cv = StratifiedKFold(n_splits=5, random_state=42, shuffle=True) models_list = [ [MLPClassifier(random_state=seed), "Multilayer Perceptron"], [LogisticRegression(solver='liblinear', random_state=seed), "Logistic Regression"], [RandomForestClassifier(n_estimators=200, random_state=seed), "Random Forest"], [lgb.LGBMClassifier(random_state=seed), "Light GBM"], [GradientBoostingClassifier(random_state=seed), "Gradient Boosting"], [xgb.XGBClassifier(random_state=seed), "XGBoost"]] score = [] model = [] for estimator, title in models_list: score.append( round( plot_learning_curve(estimator, title, mailout_train_X_scaled, mailout_train_y, cv=cv, n_jobs=5),3)) model.append(title) score_pd = pd.DataFrame({'Model':model, 'Score': score}) score_pd ###Output _____no_output_____ ###Markdown With the given results, we will now tune the xgboost hyperparameters using BayesSearchCV because Random search would takes to much time to achieve the same results ###Code bayes_cv_tuner_xg = BayesSearchCV( estimator = xgb.XGBClassifier( n_jobs = -1, objective = 'binary:logistic', eval_metric = 'auc', verbosity=1, ), search_spaces = { 'booster': ['gbtree','dart'], 'learning_rate': (0.001, 1.0, 'log-uniform'), 'max_depth': (1, 10), 'n_estimators': (10, 500), 'min_child_weight': (1, 10), 'gamma': (0.0, 1.0, 'uniform'), 'subsample': (0.5, 1.0, 'uniform'), 'colsample_bytree': (0.5, 1.0, 'uniform'), 'reg_alpha': (1e-10, 1.0, 'log-uniform'), 'scale_pos_weight': (1,100) }, scoring = 'roc_auc', cv = 
StratifiedKFold( n_splits=5, shuffle=True, random_state= seed ), n_jobs = -1, n_iter = 225, verbose = 0, refit = True, random_state = np.random.RandomState(50) ) def status_print(optim_result): """Status callback durring bayesian hyperparameter search""" # Get all the models tested so far in DataFrame format }, all_models = pd.DataFrame(bayes_cv_tuner_xg.cv_results_) # Get current parameters and the best parameters best_params = pd.Series(bayes_cv_tuner_xg.best_params_) print('Model #{}\nBest ROC-AUC: {}\nBest params: {}\n'.format( len(all_models), np.round(bayes_cv_tuner_xg.best_score_, 4), bayes_cv_tuner_xg.best_params_ )) # Save all model results clf_name = bayes_cv_tuner_xg.estimator.__class__.__name__ all_models.to_csv(clf_name+"_cv_results.csv") %%time result_xgb = bayes_cv_tuner_xg.fit(mailout_train_X_scaled, mailout_train_y, callback=status_print) ###Output Model #1 Best ROC-AUC: 0.7168 Best params: OrderedDict([('booster', 'dart'), ('colsample_bytree', 0.7256080813960445), ('gamma', 0.5005020213127345), ('learning_rate', 0.49358830548776716), ('max_depth', 1), ('min_child_weight', 5), ('n_estimators', 399), ('reg_alpha', 7.001696412022888e-07), ('scale_pos_weight', 2), ('subsample', 0.7236051016015188)]) Model #2 Best ROC-AUC: 0.7168 Best params: OrderedDict([('booster', 'dart'), ('colsample_bytree', 0.7256080813960445), ('gamma', 0.5005020213127345), ('learning_rate', 0.49358830548776716), ('max_depth', 1), ('min_child_weight', 5), ('n_estimators', 399), ('reg_alpha', 7.001696412022888e-07), ('scale_pos_weight', 2), ('subsample', 0.7236051016015188)]) Model #3 Best ROC-AUC: 0.7168 Best params: OrderedDict([('booster', 'dart'), ('colsample_bytree', 0.7256080813960445), ('gamma', 0.5005020213127345), ('learning_rate', 0.49358830548776716), ('max_depth', 1), ('min_child_weight', 5), ('n_estimators', 399), ('reg_alpha', 7.001696412022888e-07), ('scale_pos_weight', 2), ('subsample', 0.7236051016015188)]) Model #4 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #5 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #6 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #7 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #8 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), 
('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #9 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #10 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #11 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #12 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #13 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #14 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #15 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #16 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #17 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #18 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), 
('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #19 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #20 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #21 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #22 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) Model #23 Best ROC-AUC: 0.7631 Best params: OrderedDict([('booster', 'gbtree'), ('colsample_bytree', 0.9473443165107535), ('gamma', 0.20929813103737765), ('learning_rate', 0.0010260376104840778), ('max_depth', 5), ('min_child_weight', 3), ('n_estimators', 499), ('reg_alpha', 3.041690499298179e-10), ('scale_pos_weight', 11), ('subsample', 0.5603976607323644)]) ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. 
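Before uploading, it is worth checking locally what AUC to expect. The small sketch below is based on the tuned search object from Part 2; the hold-out variant in the comments assumes labelled rows that were kept out of the search.
###Code
# The cross-validated score of the best configuration from Part 2 already estimates
# the leaderboard metric, since BayesSearchCV was set up with scoring='roc_auc':
print('cross-validated AUC of best model:', bayes_cv_tuner_xg.best_score_)

# For an extra hold-out check, submission-style scores are class-1 probabilities,
# not hard labels; AUC only depends on how they rank the individuals.
# X_val / y_val would have to be labelled rows kept out of the hyperparameter search:
# from sklearn.metrics import roc_auc_score
# print('hold-out AUC:', roc_auc_score(y_val, bayes_cv_tuner_xg.predict_proba(X_val)[:, 1]))
###Output
_____no_output_____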
###Code #Best test bayes_xgb = xgb.XGBClassifier(booster='gbtree', colsample_bytree=0.5, gamma=1.0, learning_rate=0.0020276515169578386, max_depth=5, min_child_weight=10, n_estimators=251, reg_alpha=1.0, scale_pos_weight=34, subsample=0.5, eval_metric='auc', verbosity=1, n_jobs=-1) bayes_xgb.fit(mailout_train_X_scaled, mailout_train_y) feat_imp = pd.Series(bayes_xgb.feature_importances_, index=mailout_test_X.columns).sort_values(ascending=False) # plot the 25 most important features fig = plt.figure(figsize=(18, 10)) feat_imp.iloc[:25].plot(kind='barh') #, title='Feature Importances') plt.xlabel('Importance', fontsize=12) plt.ylabel('Features', fontsize=12) plt.title('most predictive features') plt.show() lnr = pd.DataFrame(mailout_test['LNR']) pred = bayes_xgb.predict_proba(mailout_test_X_scaled)[:,1] pred = pd.DataFrame(pred) sub = pd.concat([lnr,pred], axis=1) sub = sub.loc[~np.isnan(sub['LNR'])] #change LNR column type from float to int sub['LNR'] = sub['LNR'].astype(int) sub = sub.rename(columns={0: "RESPONSE"}) sub.set_index('LNR', inplace = True) sub.to_csv('submission.csv') ###Output _____no_output_____
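###Markdown
A quick sanity check of the generated file can save a wasted upload. This is only a small sketch, assuming `submission.csv` was written by the cell above.
###Code
# Quick sanity check of the submission before uploading:
# one unique LNR per row, a single RESPONSE column, and scores inside [0, 1].
check = pd.read_csv('submission.csv')
assert list(check.columns) == ['LNR', 'RESPONSE']
assert check['LNR'].is_unique
assert check['RESPONSE'].between(0, 1).all()
print(check.shape)
print(check['RESPONSE'].describe())
###Output
_____no_output_____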
src/data-collection-processing/user-data/04. adjacency-matrices.ipynb
###Markdown Constructing sparse user adjacency matricesThis notebook constructs sparse interaction user matrices for downstream clustering / other analysis tasks. ###Code from pathlib import Path import csv import json import ctypes as ct from tqdm import tqdm import sqlite3 as sq import numpy as np import scipy.sparse as sps csv.field_size_limit(int(ct.c_ulong(-1).value // 2)) path = "../../../data/users/" DB_path = path + 'users.sqlite.db' ###Output _____no_output_____ ###Markdown Adding an identifier to SQL database that can be mapped to the adjacency matrices ###Code with sq.connect(DB_path) as conn: cur = conn.cursor() try: cur.execute("ALTER TABLE users ADD COLUMN matrix_id int") cur.execute("drop table if exists tmp;") cur.execute(""" CREATE TABLE tmp as SELECT user_name, row_number() over (order by total_activity DESC) as no FROM users WHERE is_selected = True;""" ) cur.execute("CREATE INDEX IF NOT EXISTS user_name_idx ON tmp(user_name)") cur.execute("UPDATE users SET matrix_id = (SELECT no FROM tmp WHERE tmp.user_name = users.user_name);") cur.execute("drop table if exists tmp;") except sq.OperationalError: print("columns already exist") #retrieving a dictionary to map user names to IDs with sq.connect(DB_path) as conn: cur = conn.cursor() cur.execute("SELECT user_name, matrix_id FROM users WHERE matrix_id IS NOT NULL") mapped = dict(cur.fetchall()) print("Total number of users: {}".format(len(mapped))) files = [f.absolute() for f in Path(path + 'raw/').glob("*.csv")] print("Total files to be processed: {}".format(len(files))) no_users = len(mapped) int_dict = { "indirects": { "data_col": 5, "matrix": np.zeros((no_users, no_users)) }, "directs": { "data_col": 4, "matrix": np.zeros((no_users, no_users)) } } for f in tqdm(files): with open(f) as file: reader = csv.reader(file, delimiter=',') next(reader) #skip the header for row in reader: for d in int_dict.values(): interactions = json.loads(row[d['data_col']]) #get interaction dictionary user_name = row[8] for interlocutor, intensity in interactions.items(): #get IDs try: userid = mapped[user_name] - 1 # matrix_ids are 1-indexed in the DB! interid = mapped[interlocutor] - 1 #update the adjacency matrix d['matrix'][userid, interid] += intensity except KeyError: # ignore cases where there is no user in the mapped list # this only happens because mapped list is pre-filtered pass for int_name, int_vals in int_dict.items(): #then, generate a scipy matrix with condensed info and save sparse_matrix = sps.csr_matrix(int_vals['matrix']) adj_matrix_path = f"{path}adj_matrix-{int_name}-latest.npz" sps.save_npz(adj_matrix_path, sparse_matrix) ###Output _____no_output_____
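###Markdown
As a usage example, a downstream notebook (for instance the clustering mentioned at the top) can reload the saved matrices like this. It is only a minimal sketch; symmetrising the graph is shown as one possible preprocessing choice, not a requirement.
###Code
# Sketch: reloading one of the saved matrices (file name follows the pattern used above).
adj = sps.load_npz(path + 'adj_matrix-directs-latest.npz')
print(adj.shape, adj.nnz)

# Many clustering algorithms expect an undirected (symmetric) graph; one simple
# option is to add the matrix to its transpose before using it downstream.
adj_sym = adj + adj.T
###Output
_____no_output_____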
16_nlp.ipynb
###Markdown Natural Language Processing with RNNs and Attention ###Code # FIXME: meke autocompletion working again %config Completer.use_jedi = False import os %matplotlib inline import matplotlib.pyplot as plt import numpy as np import tensorflow as tf physical_devices = tf.config.list_physical_devices('GPU') if not physical_devices: print("No GPU was detected.") else: # https://stackoverflow.com/a/60699372 tf.config.experimental.set_memory_growth(physical_devices[0], True) from tensorflow import keras ###Output No GPU was detected. ###Markdown Char-RNNLet's build a RNN processing sequences of text and predicting single character. Loading the Data and Preparing the DatasetFollowing example uses famous Shakespear's texts. ###Code # Set RNG state np.random.seed(42) tf.random.set_seed(42) # Download the dataset filepath = keras.utils.get_file( "shakespeare.txt", "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt" ) # Load raw dataset with open(filepath) as f: shakespeare_text = f.read() # Show a pice of the text print(shakespeare_text[:148]) # Setup a character-based text tokenizer tokenizer = keras.preprocessing.text.Tokenizer(char_level=True) tokenizer.fit_on_texts(shakespeare_text) # Convert a text to a sequence of character IDs tokenizer.texts_to_sequences(["First"]) # Convert a sequence of character IDs back to text tokenizer.sequences_to_texts([[20, 6, 9, 8, 3]]) # Set RNG state np.random.seed(42) tf.random.set_seed(42) # number of distinct characters max_id = len(tokenizer.word_index) # total number of characters dataset_size = tokenizer.document_count # Encode the whole dataset # - TF tokenizer assigns the first character it encounters with ID=1, we shift it back to start from 0 [encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_text])) - 1 # Build a training TF Dataset from the first 90% of the text train_size = dataset_size * 90 // 100 dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size]) # Preprocessing parameters # - length of a training instance (sequence of text) # - size of a training micro-batch n_steps = 100 batch_size = 32 # target = input shifted 1 character ahead window_length = n_steps + 1 # Create training instances (sequences of text) by sliding a window over the text # - each time we shift it by single character (`shift=1`) # - `drop_remainder=True` means that we don't want to include final shortened windows with length < window length dataset = dataset.repeat().window(window_length, shift=1, drop_remainder=True) # Because `window()` creates a nested Dataset (containing sub-datasets), we want to flatten and convert it to single dataset of tensors # - the trick here is that we batch the windows to the same length they already have dataset = dataset.flat_map(lambda window: window.batch(window_length)) # Now we can safely shuffle the dataset and not to break the text # - note: shuffling ensures some degree of i.i.d. 
which is necessary for SGD to work well # - we also create training micro-batches dataset = dataset.shuffle(10000).batch(batch_size) # Split the instances to (inputs, target) where the target is the next character dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:])) # As the last step we must either encode or embed categorical features (characters) # - here we use 1-hot encoding since there's fairly few distinct characters dataset = dataset.map(lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch)) # Finally we prefetch the data for better training performance dataset = dataset.prefetch(1) # Show shapes of 1st batch tensors for X_batch, Y_batch in dataset.take(1): print(X_batch.shape, Y_batch.shape) ###Output (32, 100, 39) (32, 100) ###Markdown Creating and Training the Model ###Code # Build a simple Char-RNN model: # - there are two GRU recurrent layers with 128 units, both of which use a 20% dropout (`recurrent_dropout`) # - there's also a 20% input dropout (`dropout` parameter of the 1st layer) # - the output layer is a time-distributed dense layer with 39 units and softmax activation to predict each character's class probability model = keras.models.Sequential([ keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id], dropout=0.2, recurrent_dropout=0.2), keras.layers.GRU(128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2), keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")) ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="adam") # Train and validate the model for 10 epochs # - Note: This would take forever to train on my PC, so let's use just few batches history = model.fit(dataset.take(40), epochs=10) # history = model.fit(dataset, steps_per_epoch=train_size // batch_size, epochs=10) ###Output Epoch 1/10 40/40 [==============================] - 13s 191ms/step - loss: 3.4063 Epoch 2/10 40/40 [==============================] - 9s 189ms/step - loss: 2.9625 Epoch 3/10 40/40 [==============================] - 9s 189ms/step - loss: 2.6457 Epoch 4/10 40/40 [==============================] - 9s 189ms/step - loss: 2.4480 Epoch 5/10 40/40 [==============================] - 9s 189ms/step - loss: 2.3615 Epoch 6/10 40/40 [==============================] - 9s 191ms/step - loss: 2.2821 Epoch 7/10 40/40 [==============================] - 9s 190ms/step - loss: 2.2109 Epoch 8/10 40/40 [==============================] - 9s 190ms/step - loss: 2.1420 Epoch 9/10 40/40 [==============================] - 9s 201ms/step - loss: 2.0706 Epoch 10/10 40/40 [==============================] - 8s 188ms/step - loss: 2.0143 ###Markdown Using the Model to Generate Text ###Code def preprocess(texts): """Preprocess given text to conform to Char-RNN's input""" X = np.array(tokenizer.texts_to_sequences(texts)) - 1 return tf.one_hot(X, max_id) # Make a new prediction using the model X_new = preprocess(["How are yo"]) Y_pred = np.argmax(model.predict(X_new), axis=-1) # Show the prediction as text: 1st sentence, last char tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] ###Output _____no_output_____ ###Markdown Next, let's generate not only single letter but whole new text. One approach is to repeatedly call the above. However, this often leads to repeating the same letter over and over again. Better approach is to select next letter randomly based on the learned class probabilities. ###Code def next_char(text, temperature=1): """ Generate new characters based on given text. 1. 
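To see what the temperature parameter does before looking at the implementation, here is a tiny standalone illustration on a made-up distribution: dividing the log-probabilities by a low temperature makes sampling stick to the most likely characters, while a high temperature flattens the distribution towards uniform and produces more diverse (and more random) text.
###Code
# Standalone illustration of temperature scaling on a made-up distribution.
proba = np.array([0.5, 0.3, 0.15, 0.05])
for T in (0.2, 1.0, 2.0):
    rescaled = np.exp(np.log(proba) / T)   # equivalent to proba ** (1 / T)
    rescaled /= rescaled.sum()
    print(f'T={T}:', np.round(rescaled, 3))
###Output
_____no_output_____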
we pre-process and predict as before but return all character probablilities 2. then we compute the log of probabilities and scale it by the `temperature` parameter (the higher, the more in favour of higher prob. letters) 3. finally we select single character randomly given these log-probs. and convert the character ID back to text """ X_new = preprocess([text]) y_proba = model.predict(X_new)[0, -1:, :] rescaled_logits = tf.math.log(y_proba) / temperature char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1 return tokenizer.sequences_to_texts(char_id.numpy())[0] def complete_text(text, n_chars=50, temperature=1): """Extend given text with `n_chars` new letters""" for _ in range(n_chars): text += next_char(text, temperature) return text # Reset RNG state tf.random.set_seed(42) # Complete some text using different temperatures # - Note: this example dosn't present the model very well since it's not been trained on the full dataset print(complete_text("t", temperature=0.2)) print(complete_text("t", temperature=1)) print(complete_text("t", temperature=2)) ###Output ty no c't; mest,-haigeatfrai' at:, mearbsgr: ger. b ###Markdown Stateful RNNThe premise of a *Stateful RNN* is simple: So far we've thrown all neurons' hidden states away after applying BPTT on a training batch. In other words, hidden states were re-initialized for each partial update and so the model had hard time to learn long term patterns. The idea of a *Stateful RNN* is to keep the hidden state from previous batch and not to initialize it over again.This has, however, a consequence for the pre-processing logic. If we assume the state is transferred over from previous batches, these batches of training instances cannot overlap - they must consecutively extend each one. In our text generating example, this means we can't use overlapping windows and shuffling anymore. ###Code # Reset RNG state tf.random.set_seed(42) # (a) Updated pre-processing logic for Stateful Char-RNN # - In this version we apply single window at a time dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size]) # Contrary to before, we shift windows by full `n_steps` to create non-overlapping inputs dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True) dataset = dataset.flat_map(lambda window: window.batch(window_length)) # We skip shuffling altogether so that we don't break the preserved state and batch by 1 # - batching by 1 means that we apply just single window at a time and, again, preserve the state dataset = dataset.repeat().batch(1) # The rest of the logic is analogous dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:])) dataset = dataset.map(lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch)) dataset = dataset.prefetch(1) # (b) Updated pre-processing logic for Stateful Char-RNN # - In this more complicated version we apply a micro-batch of windows as before batch_size = 32 @tf.function def make_windowed_ds(encoded_part): """Creates a flat windowed TF Dataset of non-overlapping windows""" dataset = tf.data.Dataset.from_tensor_slices(encoded_part) dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True) return dataset.flat_map(lambda window: window.batch(window_length)) # Contrary to before, we make a windowed Dataset in two steps: # 1. We split the dateset into equal length batches and make windowed Dataset from each batch # 2. 
Then we put put all these batches back together and stack the windows so that # the n-th inputs sequence of a batch starts where the n-th sequence of the previous one ended datasets = map(make_windowed_ds, np.array_split(encoded[:train_size], batch_size)) dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows)) # Final steps are the same: # - Split each window to (inputs, target) # - 1-hot encode the categorical input features # - Prefetch the data for better performance dataset = dataset.repeat().map(lambda windows: (windows[:, :-1], windows[:, 1:])) dataset = dataset.map(lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch)) dataset = dataset.prefetch(1) # Build a Stateful RNN model # The architecture is basically the same as before, notice two distinctions: # - `stateful=True` on the recurrent layers to preserve hidden state # - `batch_input_shape` set for the initial recurrent layer to let the model know the shape (batch size) for the hidden state model = keras.models.Sequential([ keras.layers.GRU( 128, return_sequences=True, stateful=True, dropout=0.2, recurrent_dropout=0.2, batch_input_shape=[batch_size, None, max_id], ), keras.layers.GRU( 128, return_sequences=True, stateful=True, dropout=0.2, recurrent_dropout=0.2, ), keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")), ]) model.compile(loss="sparse_categorical_crossentropy", optimizer="adam") # Train and validate the model # - we use custom callback to reset model's state at the start of each epoch (instead of each batch) # - we train the model for 50 epochs, also notice the updated `steps_per_epoch` class ResetStatesCallback(keras.callbacks.Callback): """Callback that resets model's state each epoch""" def on_epoch_begin(self, epoch, logs): self.model.reset_states() history = model.fit( dataset, steps_per_epoch=train_size // batch_size // n_steps, epochs=50, callbacks=[ResetStatesCallback()], ) ###Output Epoch 1/50 313/313 [==============================] - 55s 168ms/step - loss: 2.9061 Epoch 2/50 313/313 [==============================] - 52s 165ms/step - loss: 2.2807 Epoch 3/50 313/313 [==============================] - 51s 164ms/step - loss: 2.5372 Epoch 4/50 313/313 [==============================] - 64s 206ms/step - loss: 2.6584 Epoch 5/50 313/313 [==============================] - 60s 193ms/step - loss: 2.2960 Epoch 6/50 313/313 [==============================] - 58s 186ms/step - loss: 2.2210 Epoch 7/50 313/313 [==============================] - 58s 186ms/step - loss: 2.1384 Epoch 8/50 313/313 [==============================] - 58s 185ms/step - loss: 2.0743 Epoch 9/50 313/313 [==============================] - 58s 187ms/step - loss: 2.0146 Epoch 10/50 313/313 [==============================] - 57s 182ms/step - loss: 1.9615 Epoch 11/50 313/313 [==============================] - 59s 188ms/step - loss: 1.9495 Epoch 12/50 313/313 [==============================] - 58s 184ms/step - loss: 1.9280 Epoch 13/50 313/313 [==============================] - 58s 186ms/step - loss: 1.9009 Epoch 14/50 313/313 [==============================] - 58s 184ms/step - loss: 1.8700 Epoch 15/50 313/313 [==============================] - 60s 192ms/step - loss: 1.8451 Epoch 16/50 313/313 [==============================] - 58s 184ms/step - loss: 1.8009 Epoch 17/50 313/313 [==============================] - 56s 177ms/step - loss: 1.7641 Epoch 18/50 313/313 [==============================] - 52s 165ms/step - loss: 1.7425 Epoch 19/50 313/313 [==============================] - 52s 
165ms/step - loss: 1.7211 Epoch 20/50 313/313 [==============================] - 52s 165ms/step - loss: 1.7044 Epoch 21/50 313/313 [==============================] - 52s 165ms/step - loss: 1.6913 Epoch 22/50 313/313 [==============================] - 52s 166ms/step - loss: 1.6802 Epoch 23/50 313/313 [==============================] - 53s 170ms/step - loss: 1.6696 Epoch 24/50 313/313 [==============================] - 52s 166ms/step - loss: 1.6621 Epoch 25/50 313/313 [==============================] - 52s 166ms/step - loss: 1.6509 Epoch 26/50 313/313 [==============================] - 52s 166ms/step - loss: 1.6448 Epoch 27/50 313/313 [==============================] - 52s 166ms/step - loss: 1.6379 Epoch 28/50 313/313 [==============================] - 54s 173ms/step - loss: 1.6318 Epoch 29/50 313/313 [==============================] - 57s 182ms/step - loss: 1.6254 Epoch 30/50 313/313 [==============================] - 57s 180ms/step - loss: 1.6196 Epoch 31/50 313/313 [==============================] - 56s 180ms/step - loss: 1.6159 Epoch 32/50 313/313 [==============================] - 62s 197ms/step - loss: 1.6106 Epoch 33/50 313/313 [==============================] - 58s 186ms/step - loss: 1.6069 Epoch 34/50 313/313 [==============================] - 56s 178ms/step - loss: 1.6044 Epoch 35/50 313/313 [==============================] - 59s 187ms/step - loss: 1.6008 Epoch 36/50 313/313 [==============================] - 53s 170ms/step - loss: 1.5956 Epoch 37/50 313/313 [==============================] - 62s 197ms/step - loss: 1.5922 Epoch 38/50 313/313 [==============================] - 57s 182ms/step - loss: 1.5886 Epoch 39/50 313/313 [==============================] - 64s 206ms/step - loss: 1.5878 Epoch 40/50 313/313 [==============================] - 57s 182ms/step - loss: 1.5844 Epoch 41/50 313/313 [==============================] - 57s 181ms/step - loss: 1.5828 Epoch 42/50 313/313 [==============================] - 57s 181ms/step - loss: 1.5780 Epoch 43/50 313/313 [==============================] - 57s 183ms/step - loss: 1.5758 Epoch 44/50 313/313 [==============================] - 57s 183ms/step - loss: 1.5734 Epoch 45/50 313/313 [==============================] - 56s 179ms/step - loss: 1.5717 Epoch 46/50 313/313 [==============================] - 54s 172ms/step - loss: 1.5708 Epoch 47/50 313/313 [==============================] - 54s 173ms/step - loss: 1.5689 Epoch 48/50 313/313 [==============================] - 54s 172ms/step - loss: 1.5662 Epoch 49/50 313/313 [==============================] - 54s 173ms/step - loss: 1.5647 Epoch 50/50 313/313 [==============================] - 54s 173ms/step - loss: 1.5627 ###Markdown To use the model with different batch sizes, we need to create a stateless copy. We can get rid of dropout since it is only used during training. 
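The next cell copies the weights over in memory; if inference happens in a separate script, an equivalent route is to go through a weights file. The sketch below uses standard Keras APIs; the file name is made up, and both models must share the same layer structure.
###Code
# Alternative to copying the weights in memory: persist them to a file and load them
# into the stateless clone later (for example from a separate inference script).
# The file name is made up; both models must have the same layer structure.
model.save_weights('char_rnn_stateful_weights.h5')
# ... after building the stateless model below:
# stateless_model.load_weights('char_rnn_stateful_weights.h5')
###Output
_____no_output_____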
###Code # Set RNG state tf.random.set_seed(42) # Create a steteless Char-RNN model # - This model is based on our steteful Char-RNN but used only for making predictions # - Notice: We don't need dropout since it's used only during training stateless_model = keras.models.Sequential([ keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id]), keras.layers.GRU(128, return_sequences=True), keras.layers.TimeDistributed(keras.layers.Dense(max_id, activation="softmax")), ]) # Build the stateless model # - Firstly, we can loosen the fixed batch size restriction # - Secondly, we copy learned weights from the stateful model (this works fine since dropout layers have no trainable params) stateless_model.build(tf.TensorShape([None, None, max_id])) stateless_model.set_weights(model.get_weights()) # Replace our main model by this one # - because `complete_text()` implicitly works with `model` model = stateless_model # Try to complete some text print(complete_text("t")) ###Output thee: do your carioble, thou like saggn,' dear chop ###Markdown Sentiment AnalysisLet's take a step further from the character-level RNNs to word-level sentiment analysis. Typical dataset from this taks is the IMDb reviews dataset, so let's play. ###Code # Reset RNG state tf.random.set_seed(42) # Load the IMDb reviews dataset (X_train, y_test), (X_valid, y_test) = keras.datasets.imdb.load_data() # Show a training instance # - The dataset is already preprocessed, each instance is a sequence integers which represent an ID of a word X_train[0][:10] # In order to reconstruct a word we can load the word to ID index word_index = keras.datasets.imdb.get_word_index() # And then create an inverse mapping # - Note: We shift the ID by 3 to reserve first three IDs for special markers id_to_word = {id_ + 3: word for word, id_ in word_index.items()} # These special markers are for the: # - padding symbol # - start of sequence # - unknown word for id_, token in enumerate(("<pad>", "<sos>", "<unk>")): id_to_word[id_] = token # Show a sample of decoded words " ".join(id_to_word[id_] for id_ in X_train[0][:10]) ###Output _____no_output_____ ###Markdown Now, let's create the same pre-processing logic and trainable dataset using TensorFlow's Datasets API. ###Code import tensorflow_datasets as tfds # Load the IMDb reviews TF Dataset # - Note: Using TF-only functions allows us to reuse the same pre-processing logic in every environment datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True) # List the dataset content datasets.keys() # Save and show training and test set sizes train_size = info.splits["train"].num_examples test_size = info.splits["test"].num_examples train_size, test_size # Peek the training dataset for X_batch, y_batch in datasets["train"].batch(2).take(1): for review, label in zip(X_batch.numpy(), y_batch.numpy()): print("Review:", review.decode("utf-8")[:200], "...") print("Label:", label, "= Positive" if label else "= Negative") print() def preprocess(X_batch, y_batch): """ Pre-process an input batch: 1. Crops each instance to first 300 characters (speeds up training and sentiment can usually be deduced by the first few sentences) 2. Replaces '<br />' symbols by a space character 3. Replaces each non-letter and quote character by a space 4. Splits instances by space creating a ragged tensor 5. 
Returns a dense tensor (and original label) made by padding the splits with '<pad>' """ X_batch = tf.strings.substr(X_batch, 0, 300) X_batch = tf.strings.regex_replace(X_batch, rb"<br\s*/?>", b" ") X_batch = tf.strings.regex_replace(X_batch, b"[^a-zA-Z']", b" ") X_batch = tf.strings.split(X_batch) return X_batch.to_tensor(default_value=b"<pad>"), y_batch # Try the preprocessing logic on the first training batch preprocess(X_batch, y_batch) from collections import Counter batch_size = 32 # Do a word-count over the whole pre-processed training dataset (in one pass) vocabulary = Counter( word.numpy() for X_batch, _ in datasets["train"].batch(batch_size).map(preprocess) for review in X_batch for word in review ) # Show first 3 most common words in the training corpus vocabulary.most_common()[:3] len(vocabulary) # Drop the least important words and keep just 10k most frequent ones vocab_size = 10_000 truncated_vocabulary = [word for word, _ in vocabulary.most_common(vocab_size)] # Make a word index from the truncated vocabulary word_to_id = {word: index for index, word in enumerate(truncated_vocabulary)} # Test the word index on an example sentence for word in b"This movie was faaaaaantastic".split(): print(word_to_id.get(word) or vocab_size) # Build a static vocabulary table with 1k OOV buckets num_oov_buckets = 1000 # Initialize the vocabulary from our truncated vocabulary and word index words = tf.constant(truncated_vocabulary) word_ids = tf.range(len(truncated_vocabulary), dtype=tf.int64) vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids) # Build the lookup table table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets) # Test the lookup table on the example sentence we used before table.lookup(tf.constant([b"This movie was faaaaaantastic".split()])) def encode_words(X_batch, y_batch): """Encode each word in an input batch using the static vocabulary table""" return table.lookup(X_batch), y_batch # Preprocess and encode the whole training set train_set = ( datasets["train"] .repeat() .batch(batch_size) .map(preprocess) .map(encode_words) .prefetch(1) ) # Display the 1st training batch for X_batch, y_batch in train_set.take(1): print(X_batch) print(y_batch) # The embedding dimention hyperparameter embed_size = 128 # Build a classification RNN with initial word embedding layer # - This layer's matrix has shape [ID count = vocabulary size + OOV buckets, embedding dimension] # - So the model's inputs are 2D tensors of shape [batch size, time steps], the embedding output is 3D tensor [batch size, time steps, embedding size] # - `mask_zero=True` means that we ignore ID=0 - the most frequent word which in our case is `<pad>` (so the model doesn't have to learn to ignore it) # - note: It would clearner to ensure that the padding word really has ID 0 than to count on the fact that it's the most frequent one. 
model = keras.models.Sequential([ keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size, mask_zero=True, input_shape=[None]), keras.layers.GRU(128, return_sequences=True), keras.layers.GRU(128), keras.layers.Dense(1, activation="sigmoid"), ]) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) # Train and validate the model for 5 epochs history = model.fit(train_set, steps_per_epoch=train_size // batch_size, epochs=5) ###Output Epoch 1/5 781/781 [==============================] - 82s 97ms/step - loss: 0.5957 - accuracy: 0.6606 Epoch 2/5 781/781 [==============================] - 88s 112ms/step - loss: 0.3701 - accuracy: 0.8398 Epoch 3/5 781/781 [==============================] - 81s 104ms/step - loss: 0.2081 - accuracy: 0.9237 Epoch 4/5 781/781 [==============================] - 79s 101ms/step - loss: 0.1412 - accuracy: 0.9512 Epoch 5/5 781/781 [==============================] - 114s 146ms/step - loss: 0.1072 - accuracy: 0.9602 ###Markdown Manual Masking ###Code K = keras.backend # Define an input layer inputs = keras.layers.Input(shape=[None]) # Create a mask that ignores inputs equal to 0 mask = keras.layers.Lambda(lambda inputs: K.not_equal(inputs, 0))(inputs) # Build the same model structure as before but with explicit masking of layer inputs # - Note: In the previous example the output dense layer didn't receive the implicit mask because the time dimension was not the same, # so the explicit masking is necessary if we want to propagate this information all the way to the loss function. # - Note 2: The downside is that LSTMs and GRUs won't use optimized impl. for GPUs and so the training might be slower. z = keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size)(inputs) z = keras.layers.GRU(128, return_sequences=True)(z, mask=mask) z = keras.layers.GRU(128)(z, mask=mask) # Define model's outputs outputs = keras.layers.Dense(1, activation="sigmoid")(z) # Compose and compile the model model = keras.models.Model(inputs=[inputs], outputs=[outputs]) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) # Train and validate the model for 5 epochs history = model.fit(train_set, steps_per_epoch=train_size // batch_size, epochs=5) ###Output Epoch 1/5 781/781 [==============================] - 128s 152ms/step - loss: 0.6093 - accuracy: 0.6406 Epoch 2/5 781/781 [==============================] - 132s 169ms/step - loss: 0.3711 - accuracy: 0.8425 Epoch 3/5 781/781 [==============================] - 126s 161ms/step - loss: 0.1953 - accuracy: 0.9286 Epoch 4/5 781/781 [==============================] - 132s 169ms/step - loss: 0.1205 - accuracy: 0.9582 Epoch 5/5 781/781 [==============================] - 116s 148ms/step - loss: 0.1056 - accuracy: 0.9631 ###Markdown Reusing Pretrained Embeddings ###Code import tensorflow_hub as hub # Reset RNG state tf.random.set_seed(42) # Build a model with pre-trained layers: # - Main portion of this model reuses Google's model that pre-processes and embeds words from an input text to 50 dimensional vectors # - Then we just add two dense layers for our classification task of sentiment analysis # - Note: By default TF Hub downloads models to /tmp, one can override this by setting `TFHUB_CACHE_DIR` env. 
variable # - Note 2: TF Hub layers are also by default non-trainable - if we want to tweak their weights we must unfreeze them model = keras.Sequential([ hub.KerasLayer( "https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1", dtype=tf.string, input_shape=[], output_shape=[50], ), keras.layers.Dense(128, activation="relu"), keras.layers.Dense(1, activation="sigmoid") ]) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) # Then we can just load the IMDb reviews dataset datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True) # Take the training set and just batch it (and prefetch) # - Note: The rest of the preprocessing logic is handled by the TF Hub portion of the model train_size = info.splits["train"].num_examples train_set = datasets["train"].repeat().batch(batch_size).prefetch(1) # Finally we just train and validate the model on our IMDb dataset history = model.fit(train_set, steps_per_epoch=train_size // batch_size, epochs=5) ###Output Epoch 1/5 781/781 [==============================] - 4s 5ms/step - loss: 0.5861 - accuracy: 0.6919 Epoch 2/5 781/781 [==============================] - 3s 4ms/step - loss: 0.5181 - accuracy: 0.7445 Epoch 3/5 781/781 [==============================] - 4s 5ms/step - loss: 0.5122 - accuracy: 0.7494 Epoch 4/5 781/781 [==============================] - 3s 4ms/step - loss: 0.5086 - accuracy: 0.7492 Epoch 5/5 781/781 [==============================] - 3s 4ms/step - loss: 0.5052 - accuracy: 0.7518 ###Markdown Encoder-Decoder Network for Neural Machine TranslationAs the name suggests, in the *Encoder-Decoder* architecture we split a *sequence-to-sequence* RNN into two parts:1. Encoder - takes as inputs reversed sequences of words (or rather embeddings thereof; reversed so that the decoder receives the first word first)1. Decoder - this part has actually two inputs: first the hidden states of the encoder, and second either the previous target word (during training; embedded) or the actual token that was output in the previous step (during inference; embedded)Additional notes to the architecture:* The outputs of the decoder are scores for each word in the vocabulary which are turned to probabilities using a time-distributed *softmax*. Because we can easily get to very high-dimensional outputs, typically a *sampled softmax* is used for training and a regular *softmax* for inference* In this task we cannot simply truncate input sequences to a common length as before because we want to get complete translations. Also padding to some large common length does not work.
Instead, we can bucket the sentenced into sets of close-enough lenght and pad these to match the longes one in each set.* Finally, we should ignore part of the output after an `` token - both from the output and loss function ###Code import tensorflow_addons as tfa # Set the RNG state tf.random.set_seed(42) # Sutup vocabulary and embedding size hyperparameters vocab_size = 100 embed_size = 10 # Define Encoder and Decoder inputs encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32) # Create embedding layers for the Encoder and Decoder parts embeddings = keras.layers.Embedding(vocab_size, embed_size) encoder_embeddings = embeddings(encoder_inputs) decoder_embeddings = embeddings(decoder_inputs) # Encoder is a 512 unit LSTM layer # - we can ignore encoder ouputs but we return both the short-term and long-term states with `return_state=True` # - the complete hidden state of the encoder is a pair of the short and long-term states encoder = keras.layers.LSTM(512, return_state=True) _, state_h, state_c = encoder(encoder_embeddings) encoder_state = [state_h, state_c] # Decoder is based on the `BasicDecoder` from TF Addons # - Decoder cell is a 512 unit LSTM cell # - Sampler is a component tells the Decoder what it should pretend the last step's output was: # - in this case `TrainingSampler` takses the embedding of previous target token # - other option is `ScheduledEmbedingTrainingSampler` which randomly chooses between target and actual outputs # - Model's output is a dense layer with one unit per word in the vocabulary decoder_cell = keras.layers.LSTMCell(512) output_layer = keras.layers.Dense(vocab_size) decoder = tfa.seq2seq.basic_decoder.BasicDecoder( cell=decoder_cell, sampler=tfa.seq2seq.sampler.TrainingSampler(), output_layer=output_layer, ) # Construct the Decoder # - Initial state is the complete encoder state # - We can ignore final decoder state and sequence lengths but we do care about the final outputs final_outputs, _, _ = decoder( decoder_embeddings, initial_state=encoder_state, sequence_length=sequence_lengths, ) # Final class (word) probabilities are retrieved as the (sampled) softmax of the final outputs (decoder) Y_proba = tf.nn.softmax(final_outputs.rnn_output) # Build an Encoder-Decoder model # - Note: Because the task is basically a classification task, we can use `sparse_categorical_crossentropy` as the loss function model = keras.models.Model( inputs=[encoder_inputs, decoder_inputs, sequence_lengths], outputs=[Y_proba], ) model.compile(loss="sparse_categorical_crossentropy", optimizer="adam") # Build a random sequence dataset X = np.random.randint(100, size=10*1000).reshape(1000, 10) Y = np.random.randint(100, size=15*1000).reshape(1000, 15) X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]] seq_lengths = np.full([1000], 15) # Train and validate the model on the random dataset history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2) ###Output Epoch 1/2 32/32 [==============================] - 11s 207ms/step - loss: 4.6052 Epoch 2/2 32/32 [==============================] - 6s 180ms/step - loss: 4.6024 ###Markdown Bidirectional RNNsFor forecasting future values in a time series we want to have a *causal* model - a model in which future values are predicted solely on the basis of past values. 
On the other hand in NLP tasks (such as Neural Machine Translation) it can be beneficial to embed a word based on both the past and future contexts.A *Bidirectional* layer is composed of two layers working on the same input. One layer reads the input in the original direction (left to right) and the other one is a clone except it reads in the reverse direction (right to left). The final output is some sort of a combination of both outputs - typically a concatenation. ###Code # Build an example RNN with a bidirectional GRU layer # - `Bidirectional` wrapper creates a clone in the reverse direction of a layer passed as an argument and concatenates outputs # - Note: Adding a bidirectional wrapper implicitly doubles the number of units of the prototype model = keras.models.Sequential([ keras.layers.GRU(10, return_sequences=True, input_shape=[None, 10]), keras.layers.Bidirectional(keras.layers.GRU(10, return_sequences=True)) ]) # Show model's topology model.summary() ###Output Model: "sequential_5" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= gru_10 (GRU) (None, None, 10) 660 _________________________________________________________________ bidirectional (Bidirectional (None, None, 20) 1320 ================================================================= Total params: 1,980 Trainable params: 1,980 Non-trainable params: 0 _________________________________________________________________ ###Markdown Beam SearchAnother improvement to predicting sequences of words is to keep several candidate predictions at each step instead of committing to a single greedy one. At each frame we keep a small set of $k$ most promising predictions (the *beam width*). In the next step we clone the model and compute a new distribution over the vocabulary for the next word. But this time it's a conditional probability based on the previous word's probability. We keep the $k$ best sequence continuations based on $p(w_1 w_2) = p(w_2|w_1)*p(w_1)$ and iterate.Application of the *Beam Search* can limit the chance of producing words which are frequent in the training data but sub-optimal (wrong) for a particular sentence.```pythonbeam_width = 10decoder = tfa.seq2seq.beam_search_decoder.BeamSearchDecoder( cell=decoder_cell, beam_width=beam_width, output_layer=output_layer,)final_outputs, _, _ = decoder( decoder_embeddings, start_tokens=start_tokens, end_token=end_token, initial_state=tfa.seq2seq.beam_search_decoder.tile_batch(encoder_state, multiplier=beam_width),)``` Attention MechanismsThe main problem of RNNs is their short-term memory (even though cells like LSTM and GRU help). For instance in an Encoder-Decoder architecture for NMT, it still takes too many time steps for a piece of information (a word) to propagate from the encoder to the decoder. I.e. at the time the decoder tries to decode a word, it doesn't know what the encoder thought of this word - it's lost the *attention*.The trick here is to add a shortcut - an *alignment model* (*attention model*) which takes in all the encoder outputs and combines them with decoder's hidden states to produce attention weights $\alpha_{(t,i)}$ for the decoder (weights for the t-th decoder time step from the i-th encoder output). These weights tell the decoder what to focus on.There are three attention mechanisms; the first is the original one while the latter two typically perform better and are used nowadays:1.
*Bahdanau attention (concatenative, additive)* - computes alphas by training them alongside the RNN by adding a time-distributed dense layer feeding from the concatenated `[encoder outputs; decoder hidden state]`, producing scores and applying a *softmax* (not time-distributed)1. *Luong attention (multiplicative)* - simplifies the mechanism by computing a simple dot product between the encoder's outputs and the decoder's hidden state (the scalar product is quite a successful similarity measure) instead of the dense layer to compute the scores; it also completely replaces the decoder's previous hidden state by $\tilde{\mathbf{h}}_{(t)} = \sum_i \alpha_{(t,i)} \mathbf{y}_i$.1. *Luong attention (general)* - is somewhat of a middle ground, it does add a simple linear transformation to the encoder's outputs (dense layer without biases and activation) but otherwise it's *Luong's attention*.More formally, these mechanisms can be summarized as follows:$$\tilde{\mathbf{h}}_{(t)} = \sum_i \alpha_{(t,i)} \mathbf{y}_i$$with$$\alpha_{(t,i)} = \frac{\exp(e_{(t,i)})}{\sum_{i'} \exp(e_{(t,i')})}$$and$$e_{(t,i)} = \begin{cases} \mathbf{h}_{(t)}^T \mathbf{y}_{(i)} & \quad \text{dot}\\ \mathbf{h}_{(t)}^T \mathbf{W} \mathbf{y}_{(i)} & \quad \text{general}\\ \mathbf{v}^T \tanh(\mathbf{W}[\mathbf{h}_{(t)}; \mathbf{y}_{(i)}]) & \quad \text{concat} \end{cases}$$where $\mathbf{v}$ is a rescaling parameter vector. Transformer ArchitectureThe *Transformer* takes the attention mechanism to the next level and presents a deep net architecture based solely on these modules (a bit extended) that does not contain recurrent or conv. layers yet works as an Encoder-Decoder.As any Encoder-Decoder, it has two sides where the final output of the Encoder feeds into the hidden part of the Decoder:* The encoder part is fairly simple: it starts with input embeddings, after which it adds *positional encoding* vectors (dense vectors that encode absolute and relative word positions in the input). Next there are *Multi Head Attention* and *Feed Forward* modules, each followed by a layer normalization and an added skip connection from the module inputs. The feed forward part is just two dense layers, the former with ReLU activations and the latter without any. Finally, this whole stack is repeated N times.* The decoder is basically the same but starts with a *Masked Multi Head Attention* which only differs in that it masks out inputs "in the future". Outputs of the encoder are fed to the middle (hidden) attention module. The decoder stack is also repeated N times.* The final decoder output (from the last layer of the last repetition) is passed through a simple linear layer with softmax activation. Positional EncodingAs mentioned before, *Positional Encoding (PE)* is a dense vector encoding the word position in the input sequence which is added to the word embeddings. $PE_{p,i}$ is the i-th component (added to the i-th component of the word embedding) of the word located at the p-th position in the sequence. The PE matrix can be learned but it's typically pre-computed as a fixed encoding:$$PE_{p,i} = \begin{cases} \sin(p / 10000^{i/d}) & \quad \text{if } i \text{ is even}\\ \cos(p / 10000^{(i - 1)/d}) & \quad \text{if } i \text{ is odd} \end{cases}$$This fixed encoding is favoured because it has the same performance as a learned one and can extend to arbitrarily long sequences.TensorFlow does not have a `PositionalEncoding` layer but it's not hard to implement.
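Before wrapping this into a Keras layer (next cell), a quick standalone sanity check of the formula can help - the sketch below uses plain NumPy with small made-up sizes (the `max_steps` and `max_dims` values here are illustrative, not the notebook's hyperparameters):
```python
import numpy as np

max_steps, max_dims = 6, 4  # tiny illustrative sizes

p, i = np.meshgrid(np.arange(max_steps), np.arange(max_dims // 2))
pe = np.empty((max_steps, max_dims))
pe[:, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T   # even components get sine
pe[:, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T  # odd components get cosine

print(pe.round(3))  # row p is the vector added to the embedding of the word at position p
```
The `PositionalEncoding` layer below precomputes exactly this kind of matrix once and then just crops it to the input shape and adds it to its inputs.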
###Code class PositionalEncoding(keras.layers.Layer): """Positional encoding layer""" def __init__(self, max_steps, max_dims, dtype=tf.float32, **kwargs): super().__init__(dtype=dtype, **kwargs) # Ensure that `max_dims` is even if max_dims % 2 == 1: max_dims += 1 # Crate a space of possible positions and embedding indices p, i = np.meshgrid( np.arange(max_steps), np.arange(max_dims // 2), ) # Precompute the maximum PE matrix using the formula presented above pe = np.empty((1, max_steps, max_dims)) pe[0, :, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T pe[0, :, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T # Save the PE as the requested data type self.positional_embedding = tf.constant(pe.astype(self.dtype)) def call(self, inputs): # Crop PE matrix to the shape of the inputs and add both together shape = tf.shape(inputs) return inputs + self.positional_embedding[:, :shape[-2], :shape[-1]] # Very simplified version of the Transformer # - Instead of Multi Head Attention uses plain Attention modules # - Is missing skip connections # - Omits layer normalization and dense nets # Hyperparameters of the model N = 6 embed_size = 512 max_steps = 500 vocab_size = 10000 # Define inputs for the two sides: encoder and decoder encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) # Define first layer - word embedding embeddings = keras.layers.Embedding(vocab_size, embed_size) encoder_embeddings = embeddings(encoder_inputs) decoder_embeddings = embeddings(decoder_inputs) # Add a Positional Encoding layer on top of embeddings positional_encoding = PositionalEncoding(max_steps, max_dims=embed_size) encoder_in = positional_encoding(encoder_embeddings) decoder_in = positional_encoding(decoder_embeddings) # Encoder stack Z = encoder_in for _ in range(N): Z = keras.layers.Attention(use_scale=True)([Z, Z]) encoder_outputs = Z # Decoder stack # - First attention module uses `causal=True`, i.e. masks out inputs "from the future" # - Encoder outputs feed the second attention module Z = decoder_in for _ in range(N): Z = keras.layers.Attention(use_scale=True, causal=True)([Z, Z]) Z = keras.layers.Attention(use_scale=True)([Z, encoder_outputs]) # Network outputs one probability for each word in the vocabulary # - Hence the dense layer of `vocab_size` units with softmax activation # - Inpouts are the outputs of the very last layer of the decoder outputs = keras.layers.TimeDistributed(keras.layers.Dense(vocab_size, activation="softmax"))(Z) ###Output _____no_output_____ ###Markdown Multi Head AttentionThe core component of a *Multi Head Attention* is a *Scaled Dot-Product* which was actually used in the example above (`use_scale=True`). The actual Multi Head Attention module is just a bunch of scaled do-product layers, each preceeded with three linear layers (time-distributed dense layer without activation; one for each $\mathbf{V}, \mathbf{K}, \mathbf{Q}$ - presented below). Finally, all outputs of the scaled dot-product layers are concatenated and passed through a linear layer (again time-distributed). Scaled Dot ProductLet's assume the encoder learns the meaning of words in a sentence - one can imagine this as a dictionary `"They played chess ..." -> {"subject": "They", "verb": "played", ...}`. 
The decoder then wants to do a lookup in this dictionary for, let's say, the `"verb"` - the issue is that we don't have discrete keys and values but rather vectorized representations of these.So instead of a lookup term we have a *query vector* $\mathbf{q}$ and instead of keys we also have a vector $\mathbf{k}$. The dot product $\mathbf{q}^T \mathbf{k}$ is then a similarity score of how well the query matches the keys. If we pass it through a *softmax* (to ensure it sums up to 1) and multiply by the values $\mathbf{v}$ we carry the relevance over from the key match to the values - i.e. the query results. The full scaled dot-product for a matrix of queries $\mathbf{Q}$, keys $\mathbf{K}$ and values $\mathbf{V}$ is$$Attention(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = softmax (\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d_{keys}}}) \mathbf{V}$$where $\sqrt{d_{keys}}$ is there to prevent saturating the softmax (tiny gradients). The code above actually learns this scaling factor, but the Transformer uses this fixed key-dimension scaling instead. Finally, the meaning of these matrices in the Encoder-Decoder setup is:* All the $\mathbf{Q}$, $\mathbf{K}$, $\mathbf{V}$ in the encoder equal the list of words in an input sequence. So the encoder learns the relationships between all pairs of words.* In the decoder's masked attention it's pretty much the same - these correspond to the words in the target sentence, but masked so that words don't compare to those after them.* The decoder's upper layers simply have $\mathbf{K}$ and $\mathbf{V}$ equal to the word encodings produced by the encoder while $\mathbf{Q}$ is the word encodings produced by the decoder itself. The intuition behind Multi Head AttentionThe motivation behind using multiple heads (scaled dot-products) with preceding linear layers is that a word encoding carries multiple pieces of information - about the word itself but also its position (due to PE) or e.g. past tense etc. The initial linear layers are there to make projections into these various sub-spaces, then we do the "lookup" and finally project all these searches back with the output layer.
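The class below wires several of these "lookups" together; the scaled dot-product at its core fits in a few lines on its own. A minimal sketch (the shapes are made up for illustration and this snippet is not part of the original notebook):
```python
import tensorflow as tf

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_keys)) V for a single head."""
    d_keys = tf.cast(tf.shape(K)[-1], tf.float32)
    scores = tf.matmul(Q, K, transpose_b=True) / tf.sqrt(d_keys)  # [batch, q_steps, k_steps]
    weights = tf.nn.softmax(scores, axis=-1)                      # attention weights
    return tf.matmul(weights, V)                                  # [batch, q_steps, v_dims]

Q = tf.random.normal([2, 5, 16])   # 2 sequences, 5 query steps, d_keys = 16
K = tf.random.normal([2, 7, 16])   # 7 key/value steps
V = tf.random.normal([2, 7, 32])
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 5, 32)
```
This is essentially what `keras.layers.Attention` computes internally (with `use_scale=True` it learns the scaling factor instead of fixing it to $1/\sqrt{d_{keys}}$), which is why the multi-head module below can delegate the actual attention step to it.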
###Code K = keras.backend class MultiHeadAttention(keras.layers.Layer): def __init__(self, n_heads, causal=False, use_scale=False, **kwargs): self.n_heads = n_heads self.causal = causal self.use_scale = use_scale super().__init__(**kwargs) def build(self, batch_input_shape): self.dims = batch_input_shape[0][-1] # These could be hyperparameters instead self.q_dims, self.v_dims, self.k_dims = [self.dims // self.n_heads] * 3 # Build the initial Q, K and V linear layers for each head self.q_linear = keras.layers.Conv1D(self.n_heads * self.q_dims, kernel_size=1, use_bias=False) self.v_linear = keras.layers.Conv1D(self.n_heads * self.v_dims, kernel_size=1, use_bias=False) self.k_linear = keras.layers.Conv1D(self.n_heads * self.k_dims, kernel_size=1, use_bias=False) # The attention part self.attention = keras.layers.Attention(causal=self.causal, use_scale=self.use_scale) # Linear output layer self.out_linear = keras.layers.Conv1D(self.dims, kernel_size=1, use_bias=False) super().build(batch_input_shape) def _multi_head_linear(self, inputs, linear): shape = K.concatenate([K.shape(inputs)[:-1], [self.n_heads, -1]]) projected = K.reshape(linear(inputs), shape) perm = K.permute_dimensions(projected, [0, 2, 1, 3]) return K.reshape(perm, [shape[0] * self.n_heads, shape[1], -1]) def call(self, inputs): # Split the inputs into Q, K and V # - K = V is not given in the inputs q = inputs[0] v = inputs[1] k = inputs[2] if len(inputs) > 2 else v shape = K.shape(q) # Build the Q, K and V linear projections q_proj = self._multi_head_linear(q, self.q_linear) v_proj = self._multi_head_linear(v, self.v_linear) k_proj = self._multi_head_linear(k, self.k_linear) # Pass these projections to the attention heads multi_attended = self.attention([q_proj, v_proj, k_proj]) # Reshape and concatenate the attention heads' outputs shape_attended = K.shape(multi_attended) reshaped_attended = K.reshape(multi_attended, [shape[0], self.n_heads, shape_attended[1], shape_attended[2]]) perm = K.permute_dimensions(reshaped_attended, [0, 2, 1, 3]) concat = K.reshape(perm, [shape[0], shape_attended[1], -1]) # Finally apply project the outputs back with the last linear layer return self.out_linear(concat) # Generate some random queries and values Q = np.random.rand(2, 50, 512) V = np.random.rand(2, 80, 512) # Test our Multi Head Attention module on these inputs multi_attn = MultiHeadAttention(8) multi_attn([Q, V]).shape ###Output _____no_output_____ ###Markdown Exercises RNN verifying an embedded Reber grammar ###Code # Reset RNG state np.random.seed(42) # Define the finite state machine of the Reber grammar # - https://www.willamette.edu/~gorr/classes/cs449/reber.html # - encoded as a list of state transitions: `state -> .[(symbol, next state)]` default_reber_grammar = [ [("B", 1)], [("T", 2), ("P", 3)], [("S", 2), ("X", 4)], [("T", 3), ("V", 5)], [("X", 3), ("S", 6)], [("P", 4), ("V", 6)], [("E", None)], ] # Define the embedded Reber grammar # - https://www.willamette.edu/~gorr/classes/cs449/reber.html embedded_reber_grammar = [ [("B", 1)], [("T", 2), ("P", 3)], [(default_reber_grammar, 4)], [(default_reber_grammar, 5)], [("T", 6)], [("P", 6)], [("E", None)], ] def generate_string(grammar): """Generate a random string from given (embedded) Reber grammar""" # Start at the initial state state = 0 output = [] while state is not None: # Make random transition from current state transition_ix = np.random.randint(len(grammar[state])) production, state = grammar[state][transition_ix] if isinstance(production, list): # Recurse inside an 
embedding production = generate_string(grammar=production) # Collect produced symbols output.append(production) # Reconstruct a word from produced symbols return "".join(output) # Generate few sample strings from Raber grammar for _ in range(25): print(generate_string(default_reber_grammar), end=" ") # Reset RNG staet np.random.seed(42) # Generate few sample strings from embedded Raber grammar for _ in range(25): print(generate_string(embedded_reber_grammar), end=" ") # Reset RNG state np.random.seed(42) POSSIBLE_CHARS = "BEPSTVX" def generate_corrupted_string(grammar, chars=POSSIBLE_CHARS): # Generate a valid word good_string = generate_string(grammar) # Pick a position (and corresponding symbol) which should be broken replace_ix = np.random.randint(len(good_string)) good_char = good_string[replace_ix] # Pick new symbol to replace the old one at selected position bad_char = np.random.choice(sorted(set(chars) - set(good_char))) # Do the replacement return good_string[:replace_ix] + bad_char + good_string[replace_ix + 1:] # Sample some corrupted words from the embedded grammar for _ in range(25): print(generate_corrupted_string(embedded_reber_grammar), end=" ") def str2ids(s, chars=POSSIBLE_CHARS): return [POSSIBLE_CHARS.index(c) for c in s] str2ids("BTTTXXVVETE") # Reset RNG state np.random.seed(42) def generate_ids(corrupt=False): gen = generate_corrupted_string if corrupt else generate_string return str2ids(gen(embedded_reber_grammar)) def generate_dataset(size): n_valid = size // 2 n_invlaid = size - n_valid # Generate valid and invalid words valid = [generate_ids() for _ in range(n_valid)] invalid = [generate_ids(corrupt=True) for _ in range(n_invlaid)] X = tf.ragged.constant(valid + invalid, ragged_rank=1) # Generate corresponding labels pos_labels = [[1.] for _ in range(n_valid)] neg_labels = [[0.] 
for _ in range(n_invlaid)] y = np.array(pos_labels + neg_labels) return X, y # Generate the training and test datasets containing both valid and corrupted words X_train, y_train = generate_dataset(10000) X_valid, y_valid = generate_dataset(2000) # Peek the training dataset X_train[0], y_train[0] # Reset RNG state np.random.seed(42) tf.random.set_seed(42) # Model hypeparameters embedding_size = 5 n_gru_units = 30 # Build a simple binary classifier RNN with an embedding, GRU and final dense layer model = keras.models.Sequential([ keras.layers.InputLayer(input_shape=[None], dtype=tf.int32, ragged=True), keras.layers.Embedding(input_dim=len(POSSIBLE_CHARS), output_dim=embedding_size), keras.layers.GRU(n_gru_units), keras.layers.Dense(1, activation="sigmoid") ]) # Compile the model model.compile( loss="binary_crossentropy", optimizer=keras.optimizers.SGD(lr=0.02, momentum = 0.95, nesterov=True), metrics=["accuracy"], ) # Train and validate the model history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid)) # Build few test samples test_strings = [ "BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE", "BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE", ] X_test = tf.ragged.constant([str2ids(s) for s in test_strings], ragged_rank=1) # Make a prediction on these test samples y_proba = model.predict(X_test) # Show the predictions and model confidence print() print("Estimated probability that these are Reber strings:") for i, s in enumerate(test_strings): print("{}: {:.2f}%".format(s, 100 * y_proba[i][0])) ###Output Estimated probability that these are Reber strings: BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE: 0.08% BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE: 99.96% ###Markdown Encoder–Decoder model for date string conversion ###Code from datetime import date # Reset RNG state np.random.seed(42) # We cannot use strftime()'s %B format since it depends on the locale MONTHS = [ "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December", ] def random_dates(n_dates, min_date=date(1000, 1, 1), max_date=date(9999, 12, 31)): """Generate n random labeled instances between given min and max dates""" # Get ordinal values for date bounds min_date = min_date.toordinal() max_date = max_date.toordinal() # Generate n random dates between the bounds ordinals = np.random.randint(max_date - min_date, size=n_dates) + min_date dates = [date.fromordinal(ordinal) for ordinal in ordinals] # Instances are dates in "<month> <day>, <year>" format x = [MONTHS[dt.month - 1] + " " + dt.strftime("%d, %Y") for dt in dates] # Target is the standard date ISO format y = [dt.isoformat() for dt in dates] return x, y # Show few examples n_dates = 3 x_example, y_example = random_dates(n_dates) print("{:25s}{:25s}".format("Input", "Target")) print("-" * 50) for idx in range(n_dates): print(f"{x_example[idx]:25s}{y_example[idx]:25s}") # Define the input and output alphabets INPUT_CHARS = "".join(sorted(set("".join(MONTHS)))) + "01234567890, " OUTPUT_CHARS = "0123456789-" INPUT_CHARS, OUTPUT_CHARS def date_str_to_ids(date_str, chars=INPUT_CHARS): return [chars.index(c) for c in date_str] date_str_to_ids(x_example[0], INPUT_CHARS) date_str_to_ids(y_example[0], OUTPUT_CHARS) # Reset RNG state np.random.seed(42) def prepare_date_strs(date_strs, chars=INPUT_CHARS): """ Encode given date strings to character IDs, returns a ragged tensor. Note: ID=0 is used for the padding token, so every index to `chars` is shifted by 1. 
""" X_ids = [date_str_to_ids(dt, chars) for dt in date_strs] X = tf.ragged.constant(X_ids, ragged_rank=1) return (X + 1).to_tensor() def create_dataset(n_dates): x, y = random_dates(n_dates) return prepare_date_strs(x, INPUT_CHARS), prepare_date_strs(y, OUTPUT_CHARS) # Generate training, validation and test datesets X_train, Y_train = create_dataset(10000) X_valid, Y_valid = create_dataset(2000) X_test, Y_test = create_dataset(2000) Y_train[0] ###Output _____no_output_____ ###Markdown Basic seq2seq model ###Code # Reset RNG state np.random.seed(42) tf.random.set_seed(42) # Basic constants # - Note: Dimensions have +1 due to the extra tokens max_output_length = Y_train.shape[1] input_dim = len(INPUT_CHARS) + 1 output_dim = len(OUTPUT_CHARS) + 1 # Model hyperparameters embedding_size = 32 # Create an encoder encoder = keras.models.Sequential([ keras.layers.Embedding(input_dim=input_dim, output_dim=embedding_size, input_shape=[None]), keras.layers.LSTM(128), ]) # Create a decoder decoder = keras.models.Sequential([ keras.layers.LSTM(128, return_sequences=True), keras.layers.Dense(output_dim, activation="softmax") ]) # Build simple Encoder-Decoder model # - Note: We repeate encoder's output because it outputs a vector and decoder expects a sequence model = keras.models.Sequential([ encoder, keras.layers.RepeatVector(max_output_length), decoder, ]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.Nadam(), metrics=["accuracy"]) # Train and validate the model history = model.fit(X_train, Y_train, epochs=20, validation_data=(X_valid, Y_valid)) def ids_to_date_strs(ids, chars=OUTPUT_CHARS, pad="?"): symbols = pad + chars return ["".join(symbols[i] for i in seq) for seq in ids] # Generate few test examples X_new = prepare_date_strs([ "September 17, 2009", "July 14, 1789", "May 02, 2020", "July 14, 1789", ]) # Make a prediction on these examples ids = np.argmax(model.predict(X_new), axis=-1) # Show predictions for date_str in ids_to_date_strs(ids): print(date_str) ###Output 2009-09-17 1789-07-14 2020-05-02 1789-07-14 ###Markdown We need to ensure that we always pass sequences of the same length as during training, using padding if necessary. 
###Code max_input_length = X_train.shape[1] def prepare_date_strs_padded(date_strs): X = prepare_date_strs(date_strs) input_length = X.shape[1] # Add padding tokens if necessary if input_length < max_input_length: X = tf.pad(X, [[0, 0], [0, max_input_length - input_length]]) return X def convert_date_strs(date_strs): """Make a prediction including preprocessing and postprocessing""" X = prepare_date_strs_padded(date_strs) ids = np.argmax(model.predict(X), axis=-1) return ids_to_date_strs(ids) # Try problematic instances again with this new preprocessing convert_date_strs(["May 02, 2020", "July 14, 1789"]) ###Output _____no_output_____ ###Markdown Feeding the shifted targets to the decoder ###Code # Start of sequence ID sos_id = len(OUTPUT_CHARS) + 1 def shifted_output_sequences(Y): # Shift the targets by 1 to the right # - So that the decoder will know the previous target character # - Note: Since we shift the targets, the decoder need a token for the first character, hence the SoS sos_tokens = tf.fill(dims=(len(Y), 1), value=sos_id) return tf.concat([sos_tokens, Y[:, :-1]], axis=1) # Create new decoder inputs by shift all targets by 1 to the right X_train_decoder = shifted_output_sequences(Y_train) X_valid_decoder = shifted_output_sequences(Y_valid) X_test_decoder = shifted_output_sequences(Y_test) X_train_decoder # Reset RNG state np.random.seed(42) tf.random.set_seed(42) # Define basic constants encoder_input_dim = len(INPUT_CHARS) + 1 # +1 for padding decoder_input_dim = len(OUTPUT_CHARS) + 2 # +1 for padding +1 for sos output_dim = len(OUTPUT_CHARS) + 1 # +1 for padding # Hyperparameters encoder_embedding_size = 32 decoder_embedding_size = 32 lstm_units = 128 # Create an encoder encoder_input = keras.layers.Input(shape=[None], dtype=tf.int32) encoder_embedding = keras.layers.Embedding(input_dim=encoder_input_dim, output_dim=encoder_embedding_size)(encoder_input) _, encoder_state_h, encoder_state_c = keras.layers.LSTM(lstm_units, return_state=True)(encoder_embedding) encoder_state = [encoder_state_h, encoder_state_c] # Create a decoder that takes two kinds of inputs: # 1. Shifted targets pass through an embedding and then directly to the LSTM layer # 2. Full encoder's state is passed as an initial state for the LSTM layer decoder_input = keras.layers.Input(shape=[None], dtype=tf.int32) decoder_embedding = keras.layers.Embedding(input_dim=decoder_input_dim, output_dim=decoder_embedding_size)(decoder_input) decoder_lstm_output = keras.layers.LSTM(lstm_units, return_sequences=True)(decoder_embedding, initial_state=encoder_state) decoder_output = keras.layers.Dense(output_dim, activation="softmax")(decoder_lstm_output) # Build an inproved Encoder-Decoder model model = keras.models.Model(inputs=[encoder_input, decoder_input], outputs=[decoder_output]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.Nadam(), metrics=["accuracy"]) # Train and validate the model # - This time we pass both inputs (one for the encoder and the other for decoder) # - Notice: We train the model for half the epochs compared to the last one, yet the validation accuracy is the same. 
history = model.fit([X_train, X_train_decoder], Y_train, epochs=10, validation_data=([X_valid, X_valid_decoder], Y_valid)) def predict_date_strs(date_strs): # Prepare both inputs (encoder, decoder) X = prepare_date_strs_padded(date_strs) Y_pred = tf.fill(dims=(len(X), 1), value=sos_id) # With this model we need to predict characters one by one for index in range(max_output_length): # Pad decoder inputs to the same lenght pad_size = max_output_length - Y_pred.shape[1] X_decoder = tf.pad(Y_pred, [[0, 0], [0, pad_size]]) # Make a single character prediction Y_probas_next = model.predict([X, X_decoder])[:, index:index+1] Y_pred_next = tf.argmax(Y_probas_next, axis=-1, output_type=tf.int32) # Build up the ouptut sequece / basis for the next input for the decoder Y_pred = tf.concat([Y_pred, Y_pred_next], axis=1) # Convert the output back to a date string return ids_to_date_strs(Y_pred[:, 1:]) # Make new predictions predict_date_strs(["July 14, 1789", "May 01, 2020"]) ###Output _____no_output_____ ###Markdown TF-Addons's seq2seq implementation ###Code import tensorflow_addons as tfa # Reset RNG state np.random.seed(42) tf.random.set_seed(42) # Define basic constants encoder_input_dim = len(INPUT_CHARS) + 1 decoder_input_dim = len(INPUT_CHARS) + 2 output_dim = len(OUTPUT_CHARS) + 1 # Hyperparameters encoder_embedding_size = 32 decoder_embedding_size = 32 units = 128 # Define inputs for both parts encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32) # Create embedding layers encoder_embeddings = keras.layers.Embedding(encoder_input_dim, encoder_embedding_size)(encoder_inputs) decoder_embedding_layer = keras.layers.Embedding(decoder_input_dim, decoder_embedding_size) decoder_embeddings = decoder_embedding_layer(decoder_inputs) # The Encoder encoder = keras.layers.LSTM(units, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_embeddings) encoder_state = [state_h, state_c] # Crate a training sampler sampler = tfa.seq2seq.sampler.TrainingSampler() # Define some reusable components for the Decoder decoder_cell = keras.layers.LSTMCell(units) output_layer = keras.layers.Dense(output_dim) # The Decoder and final output decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler, output_layer=output_layer) final_outputs, _, _ = decoder(decoder_embeddings, initial_state=encoder_state) Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output) # Build the Encoder-Decoder model using TF Addons model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=[Y_proba]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.Nadam(), metrics=["accuracy"]) # Train and validate the model history = model.fit([X_train, X_train_decoder], Y_train, epochs=15, validation_data=([X_valid, X_valid_decoder], Y_valid)) # Test the model by making new predictions predict_date_strs(["July 14, 1789", "May 01, 2020"]) ###Output Epoch 1/15 313/313 [==============================] - 13s 27ms/step - loss: 1.9228 - accuracy: 0.3129 - val_loss: 1.4618 - val_accuracy: 0.4195 Epoch 2/15 313/313 [==============================] - 7s 23ms/step - loss: 1.4352 - accuracy: 0.4320 - val_loss: 1.2536 - val_accuracy: 0.5224 Epoch 3/15 313/313 [==============================] - 7s 24ms/step - loss: 1.1839 - accuracy: 0.5535 - val_loss: 0.8897 - val_accuracy: 0.6775 Epoch 4/15 313/313 [==============================] - 7s 
24ms/step - loss: 0.7203 - accuracy: 0.7446 - val_loss: 0.3395 - val_accuracy: 0.9007 Epoch 5/15 313/313 [==============================] - 7s 24ms/step - loss: 0.2633 - accuracy: 0.9310 - val_loss: 0.1159 - val_accuracy: 0.9818 Epoch 6/15 313/313 [==============================] - 7s 24ms/step - loss: 0.1015 - accuracy: 0.9857 - val_loss: 0.0425 - val_accuracy: 0.9983 Epoch 7/15 313/313 [==============================] - 7s 24ms/step - loss: 0.0653 - accuracy: 0.9915 - val_loss: 0.0242 - val_accuracy: 0.9996 Epoch 8/15 313/313 [==============================] - 9s 30ms/step - loss: 0.0292 - accuracy: 0.9974 - val_loss: 0.0248 - val_accuracy: 0.9992 Epoch 9/15 313/313 [==============================] - 9s 30ms/step - loss: 0.0178 - accuracy: 0.9998 - val_loss: 0.0118 - val_accuracy: 0.9998 Epoch 10/15 313/313 [==============================] - 9s 29ms/step - loss: 0.0097 - accuracy: 1.0000 - val_loss: 0.0083 - val_accuracy: 0.9999 Epoch 11/15 313/313 [==============================] - 9s 29ms/step - loss: 0.0068 - accuracy: 1.0000 - val_loss: 0.0061 - val_accuracy: 0.9999 Epoch 12/15 313/313 [==============================] - 9s 29ms/step - loss: 0.0049 - accuracy: 1.0000 - val_loss: 0.0046 - val_accuracy: 0.9999 Epoch 13/15 313/313 [==============================] - 9s 30ms/step - loss: 0.0037 - accuracy: 1.0000 - val_loss: 0.0036 - val_accuracy: 0.9999 Epoch 14/15 313/313 [==============================] - 9s 30ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 0.0029 - val_accuracy: 0.9999 Epoch 15/15 313/313 [==============================] - 9s 30ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.0023 - val_accuracy: 0.9999 ###Markdown Instead of manually making new predictions for each character, we can build new decoder component for the inference that does the same automatically. ###Code # Make a new sampler that each time computes the argmax of the decoder's outputs that feeds it back to the embedding layer / LSTM cell inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler(embedding_fn=decoder_embedding_layer) # Build new inference Decoder # Note: `maximum_iterations` are there to prevent infinite loops # - if model never outputs the end token for at least one of the sequences inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder( decoder_cell, inference_sampler, output_layer=output_layer, maximum_iterations=max_output_length, ) batch_size = tf.shape(encoder_inputs)[:1] start_tokens = tf.fill(dims=batch_size, value=sos_id) final_outputs, _, _ = inference_decoder( start_tokens, initial_state=encoder_state, start_tokens=start_tokens, end_token=0, ) # Build new model for inference # - Note: We don't need decoder's inputs anymore as they will be generated dynamically # - Note 2: We return `sample_id` instead of all the logits inference_model = keras.models.Model(inputs=[encoder_inputs], outputs=[final_outputs.sample_id]) def fast_predict_date_strs(date_strs): """Inference function that calls the inference model just once""" X = prepare_date_strs_padded(date_strs) Y_pred = inference_model.predict(X) return ids_to_date_strs(Y_pred) # Test the inference model fast_predict_date_strs(["July 14, 1789", "May 01, 2020"]) %timeit predict_date_strs(["July 14, 1789", "May 01, 2020"]) %timeit fast_predict_date_strs(["July 14, 1789", "May 01, 2020"]) ###Output 37.2 ms ± 977 µs per loop (mean ± std. dev. 
of 7 runs, 10 loops each) ###Markdown TF-Addons's seq2seq with a scheduled sampler ###Code # Reset RNG state np.random.seed(42) tf.random.set_seed(42) encoder_input_dim = len(INPUT_CHARS) + 1 decoder_input_dim = len(INPUT_CHARS) + 2 output_dim = len(INPUT_CHARS) + 1 # Hyperparameters n_epochs = 20 encoder_embedding_size = 32 decoder_embedding_size = 32 units = 128 # Build the Encoder-Decoder model # - Note: The only differencees are in the `ScheduledEmbeddingTrainingSampler` and addition of a sampling callback # Inputs encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32) sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32) # Embeddings encoder_embeddings = keras.layers.Embedding(encoder_input_dim, encoder_embedding_size)(encoder_inputs) decoder_embedding_layer = keras.layers.Embedding(decoder_input_dim, decoder_embedding_size) decoder_embeddings = decoder_embedding_layer(decoder_inputs) # The Encoder encoder = keras.layers.LSTM(units, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_embeddings) encoder_state = [state_h, state_c] # Scheduled sampler # - Sampler gradually replaces (with increasing probability) targets with previous predictions # - As the training progresses the decoder starts to get the same inputs as during inference sampler = tfa.seq2seq.sampler.ScheduledEmbeddingTrainingSampler( sampling_probability=0., embedding_fn=decoder_embedding_layer, ) sampler.sampling_probability = tf.Variable(0.) def update_sampling_probability(epoch, logs): """Function implementing a sampling probability schedule""" proba = min(1.0, epoch / (n_epochs - 10)) sampler.sampling_probability.assign(proba) # The Decoder and output decoder_cell = keras.layers.LSTMCell(units) output_layer = keras.layers.Dense(output_dim) decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler, output_layer=output_layer) final_outputs, _, _ = decoder(decoder_embeddings, initial_state=encoder_state) Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output) # Build the model model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=[Y_proba]) model.compile(loss="sparse_categorical_crossentropy", optimizer=keras.optimizers.Nadam(), metrics=["accuracy"]) # Train and validate the model # - Notice: We register sampler's schedule update as a callback triggering each epoch history = model.fit( [X_train, X_train_decoder], Y_train, epochs=n_epochs, validation_data=([X_valid, X_valid_decoder], Y_valid), callbacks=[keras.callbacks.LambdaCallback(on_epoch_begin=update_sampling_probability)], ) ###Output Epoch 1/20
TEMA-2/Ejercicio_de_clase15.ipynb
###Markdown Ejercicio de clase ###Code import numpy as np from functools import reduce import time import matplotlib.pyplot as plt import scipy.stats as st # Librería estadística import pandas as pd from scipy import optimize # Función que grafica subplots para cada señal de distribución Erlang def histograma_vs_densidad(signal:'variable con muestras aleatorias de la distribución generada', f:'función de distribución de probablidad f(x) de la variable aleatoria'): plt.figure(figsize=(8,3)) count, x, _ = plt.hist(signal,100,density=True) y = f(x) plt.plot(x, y, linewidth=2,color='k') plt.ylabel('Probabilidad') plt.xlabel('Muestras') # plt.legend() plt.show() def Gen_distr_discreta(U:'vector de números aleatorios', p_acum: 'P.Acumulada de la distribución a generar'): '''Tener en cuenta que este arreglo cuenta números empezando del 0''' v = np.array(list(map(lambda j:sum(1 for i in p_acum if i<U[j]),range(len(U))))) return v def plot_histogram_discrete(distribucion:'distribución a graficar histograma', label:'label del legend'): # len(set(distribucion)) cuenta la cantidad de elementos distintos de la variable 'distribucion' plt.figure(figsize=[8,4]) y,x = np.histogram(distribucion,density = True,bins = len(set(distribucion))) plt.bar(list(set(distribucion)),y,label=label) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown ![image.png](attachment:image.png) ###Code num_vent = [2,3,4,5,6] num_dias = [4,7,8,5,1] data ###Output _____no_output_____ ###Markdown Transformada inversa ###Code np.random.seed(55) N = 100 plot_histogram_discrete(m1,'transformada inversa') ###Output _____no_output_____ ###Markdown Estimar media ###Code media_teo = np.sum(np.array(num_vent)*data['probability']) media_teo ###Output _____no_output_____ ###Markdown a) Montecarlo b) Muestreo estratificado- 30% de las muestras entre 0 y 0.2- 40% de las muestras entre 0.2 y 0.8- 30% de las muestras entre 0.8 y 1 ###Code r1 = np.random.uniform(0,0.2,int(0.3*N)) r2 = np.random.uniform(0.2,0.8,int(0.4*N)) r3 = np.random.uniform(0.8,1,int(0.3*N)) r = [r1,r2,r3] w = [3/2,2/3,3/2] m2 = list(map(lambda ri:Gen_distr_discreta(ri,data['p_acumulada'])+2,r)) m2 = list(map(lambda xi,wi:xi/wi,m2,w)) print('Estratificado 1 =',np.concatenate(m2).mean()) ###Output Estratificado 1 = 3.6516666666666664 ###Markdown c) Estratificado 2 ###Code def estra(B): U2 = np.random.rand(B) i = np.arange(0,B) estra = (U2+i)/B return estra print('Estratificado 2 =',np.mean(m3)) ###Output Estratificado 2 = 3.68 ###Markdown d) complementario ###Code # len(Uc) print('Complementario =',np.mean(m4)) ###Output Complementario = 3.705 ###Markdown Ejercicio 2Distribución geométrica (Método de aceptación y rechazo distribuciones discretas)$$ f(x) = p(1-p)^{x-1}, \quad x\in 1,2,3,4,5,\cdots$$ ###Code # Función de aceptación y rechazo usando compresión de listas def Acep_rechazo(N:'Cantidad de variables a generar', Dom_f:'Dominio de la función f como tupla (a,b)', f:'función objetivo a generar', max_f:'máximo valor de f'): X = np.zeros(N) i = 0 while i<N: R1 = np.random.randint(Dom_f[0],Dom_f[1]) R2 = np.random.uniform(0,max_f) if R2<= f(R1): X[i] = R1 i+=1 return X def plot_histogram_discrete(distribucion:'señal de varibles aleatorias de un distribución DISCRETA dada', label:'label del legend a aparecer en el gráfica', densidad:'por defecto regresa el histograma en densidad'= True): # len(set(distribucion)) cuenta la cantidad de elementos distintos de la variable 'distribucion' plt.figure(figsize=[8,4]) y,x = np.histogram(distribucion,bins = 
len(set(distribucion)),density = densidad) plt.bar(x[1:],y,label=label,width=.5) plt.legend() # plt.show() N = 1000 p = 0.5 f_x = lambda x: p*(1-p)**(x-1) max_f = 1 acep_r = Acep_rechazo(N,(0,15),f_x,max_f) plot_histogram_discrete(acep_r,'aceptación y rechazo') x = np.arange(1,15) plt.stem(x,f_x(x),'r',label='distribución teórica',use_line_collection=True) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Ejercicio ![image.png](attachment:image.png) ###Code f_x = lambda x: 1/x**2 if x>=1 else 0 ###Output _____no_output_____ ###Markdown a) Montecarlo ###Code N=10 ###Output 25.05989612364478 ###Markdown b) Muestreo estratificado ###Code np.random.seed(100) muestras2 np.concatenate(estra1).mean() ###Output _____no_output_____ ###Markdown c) Estratificado 2 ###Code def estra(B): U2 = np.random.rand(B) i = np.arange(0,B) estra = (U2+i)/B return estra rand = estra(10) np.mean(muestras3) ###Output _____no_output_____
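The solution cells of this last exercise are left mostly empty (the referenced variables `muestras2`, `estra1` and `muestras3` are not defined in the visible cells). One possible sketch of the missing sampling step, assuming the intent is to draw from $f(x)=1/x^2,\ x\ge 1$ by the inverse transform ($F(x) = 1 - 1/x$, so $x = 1/(1-u)$) and then average the samples - the variable names below are illustrative, not the notebook's own:
```python
import numpy as np

np.random.seed(100)
N = 10

# a) Plain Monte Carlo: inverse-transform samples x = 1/(1 - u)
u = np.random.rand(N)
x = 1 / (1 - u)
print('Monte Carlo =', x.mean())

# c) Stratified sampling reusing the estra() helper defined above
u_strat = estra(N)            # one uniform draw per stratum: (i + U_i) / N
x_strat = 1 / (1 - u_strat)
print('Stratified 2 =', x_strat.mean())
```
Note that for this density the theoretical mean $\int_1^\infty x \cdot x^{-2}\,dx$ diverges, which helps explain why the estimates are so unstable across seeds and sample sizes.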
Introduction_to_Deep_Learning/week_6/notebooks/week6_final_project_image_captioning_clean.ipynb
###Markdown Image Captioning Final ProjectIn this final project you will define and train an image-to-caption model, that can produce descriptions for real world images!Model architecture: CNN encoder and RNN decoder. (https://research.googleblog.com/2014/11/a-picture-is-worth-thousand-coherent.html) Import stuff ###Code import sys sys.path.append("../../utils") import grading import download_utils import tensorflow.compat.v1 as tf import tensorflow.compat.v1.keras as keras import numpy as np %matplotlib inline import matplotlib.pyplot as plt L = keras.layers K = keras.backend import utils import time import zipfile import json from collections import defaultdict import re import random from random import choice import grading_utils import os from keras_utils import reset_tf_session import tqdm_utils tf.disable_v2_behavior() ###Output _____no_output_____ ###Markdown Fill in your Coursera token and emailTo successfully submit your answers to our grader, please fill in your Coursera submission token and email ###Code grader = grading.Grader(assignment_key="NEDBg6CgEee8nQ6uE8a7OA", all_parts=["19Wpv", "uJh73", "yiJkt", "rbpnH", "E2OIL", "YJR7z"]) # token expires every 30 min COURSERA_TOKEN = ### YOUR TOKEN HERE COURSERA_EMAIL = ### YOUR EMAIL HERE ###Output _____no_output_____ ###Markdown Download dataTakes 10 hours and 20 GB. We've downloaded necessary files for you.Relevant links (just in case):- train images http://msvocds.blob.core.windows.net/coco2014/train2014.zip- validation images http://msvocds.blob.core.windows.net/coco2014/val2014.zip- captions for both train and validation http://msvocds.blob.core.windows.net/annotations-1-0-3/captions_train-val2014.zip Extract image featuresWe will use pre-trained InceptionV3 model for CNN encoder (https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html) and extract its last hidden layer as an embedding: ###Code IMG_SIZE = 299 # we take the last hidden layer of IncetionV3 as an image embedding def get_cnn_encoder(): K.set_learning_phase(False) model = keras.applications.InceptionV3(include_top=False) preprocess_for_model = keras.applications.inception_v3.preprocess_input model = keras.models.Model(model.inputs, keras.layers.GlobalAveragePooling2D()(model.output)) return model, preprocess_for_model ###Output _____no_output_____ ###Markdown Features extraction takes too much time on CPU:- Takes 16 minutes on GPU.- 25x slower (InceptionV3) on CPU and takes 7 hours.- 10x slower (MobileNet) on CPU and takes 3 hours.So we've done it for you with the following code:```python load pre-trained modelreset_tf_session()encoder, preprocess_for_model = get_cnn_encoder() extract train featurestrain_img_embeds, train_img_fns = utils.apply_model( "train2014.zip", encoder, preprocess_for_model, input_shape=(IMG_SIZE, IMG_SIZE))utils.save_pickle(train_img_embeds, "train_img_embeds.pickle")utils.save_pickle(train_img_fns, "train_img_fns.pickle") extract validation featuresval_img_embeds, val_img_fns = utils.apply_model( "val2014.zip", encoder, preprocess_for_model, input_shape=(IMG_SIZE, IMG_SIZE))utils.save_pickle(val_img_embeds, "val_img_embeds.pickle")utils.save_pickle(val_img_fns, "val_img_fns.pickle") sample images for learnersdef sample_zip(fn_in, fn_out, rate=0.01, seed=42): np.random.seed(seed) with zipfile.ZipFile(fn_in) as fin, zipfile.ZipFile(fn_out, "w") as fout: sampled = filter(lambda _: np.random.rand() < rate, fin.filelist) for zInfo in sampled: fout.writestr(zInfo, fin.read(zInfo)) sample_zip("train2014.zip", 
"train2014_sample.zip")sample_zip("val2014.zip", "val2014_sample.zip")``` ###Code # load prepared embeddings train_img_embeds = utils.read_pickle("train_img_embeds.pickle") train_img_fns = utils.read_pickle("train_img_fns.pickle") val_img_embeds = utils.read_pickle("val_img_embeds.pickle") val_img_fns = utils.read_pickle("val_img_fns.pickle") # check shapes print(train_img_embeds.shape, len(train_img_fns)) print(val_img_embeds.shape, len(val_img_fns)) # check prepared samples of images list(filter(lambda x: x.endswith("_sample.zip"), os.listdir("."))) ###Output _____no_output_____ ###Markdown Extract captions for images ###Code # extract captions from zip def get_captions_for_fns(fns, zip_fn, zip_json_path): zf = zipfile.ZipFile(zip_fn) j = json.loads(zf.read(zip_json_path).decode("utf8")) id_to_fn = {img["id"]: img["file_name"] for img in j["images"]} fn_to_caps = defaultdict(list) for cap in j['annotations']: fn_to_caps[id_to_fn[cap['image_id']]].append(cap['caption']) fn_to_caps = dict(fn_to_caps) return list(map(lambda x: fn_to_caps[x], fns)) train_captions = get_captions_for_fns(train_img_fns, "captions_train-val2014.zip", "annotations/captions_train2014.json") val_captions = get_captions_for_fns(val_img_fns, "captions_train-val2014.zip", "annotations/captions_val2014.json") # check shape print(len(train_img_fns), len(train_captions)) print(len(val_img_fns), len(val_captions)) # look at training example (each has 5 captions) def show_trainig_example(train_img_fns, train_captions, example_idx=0): """ You can change example_idx and see different images """ zf = zipfile.ZipFile("train2014_sample.zip") captions_by_file = dict(zip(train_img_fns, train_captions)) all_files = set(train_img_fns) found_files = list(filter(lambda x: x.filename.rsplit("/")[-1] in all_files, zf.filelist)) example = found_files[example_idx] img = utils.decode_image_from_buf(zf.read(example)) plt.imshow(utils.image_center_crop(img)) plt.title("\n".join(captions_by_file[example.filename.rsplit("/")[-1]])) plt.show() show_trainig_example(train_img_fns, train_captions, example_idx=142) ###Output _____no_output_____ ###Markdown Prepare captions for training ###Code # preview captions data train_captions[:2] # special tokens PAD = "#PAD#" UNK = "#UNK#" START = "#START#" END = "#END#" # split sentence into tokens (split into lowercased words) def split_sentence(sentence): return list(filter(lambda x: len(x) > 0, re.split('\W+', sentence.lower()))) def generate_vocabulary(train_captions): """ Return {token: index} for all train tokens (words) that occur 5 times or more, `index` should be from 0 to N, where N is a number of unique tokens in the resulting dictionary. Use `split_sentence` function to split sentence into tokens. Also, add PAD (for batch padding), UNK (unknown, out of vocabulary), START (start of sentence) and END (end of sentence) tokens into the vocabulary. """ from collections import Counter word_counts = Counter() for text in train_captions: for sentence in text: word_counts.update(split_sentence(sentence)) vocab = [START] + [w for w, c in word_counts.items() if c >= 5] + [END, PAD, UNK] return {token: index for index, token in enumerate(sorted(vocab))} def caption_tokens_to_indices(captions, vocab): """ `captions` argument is an array of arrays: [ [ "image1 caption1", "image1 caption2", ... ], [ "image2 caption1", "image2 caption2", ... ], ... ] Use `split_sentence` function to split sentence into tokens. Replace all tokens with vocabulary indices, use UNK for unknown words (out of vocabulary). 
Add START and END tokens to start and end of each sentence respectively. For the example above you should produce the following: [ [ [vocab[START], vocab["image1"], vocab["caption1"], vocab[END]], [vocab[START], vocab["image1"], vocab["caption2"], vocab[END]], ... ], ... ] """ res = [ [ [vocab[START]] + \ [vocab[w] if w in vocab else vocab[UNK] for w in split_sentence(sentence)] + \ [vocab[END]] for sentence in caption ] for caption in captions ] return res # prepare vocabulary vocab = generate_vocabulary(train_captions) vocab_inverse = {idx: w for w, idx in vocab.items()} print(len(vocab)) import _pickle as pickle with open('file.txt', 'rb') as file: vocab = pickle.load(file) vocab_inverse = {idx: w for w, idx in vocab.items()} print(len(vocab)) # replace tokens with indices train_captions_indexed = caption_tokens_to_indices(train_captions, vocab) val_captions_indexed = caption_tokens_to_indices(val_captions, vocab) ###Output _____no_output_____ ###Markdown Captions have different length, but we need to batch them, that's why we will add PAD tokens so that all sentences have an equal length. We will crunch LSTM through all the tokens, but we will ignore padding tokens during loss calculation. ###Code # we will use this during training def batch_captions_to_matrix(batch_captions, pad_idx, max_len=None): """ `batch_captions` is an array of arrays: [ [vocab[START], ..., vocab[END]], [vocab[START], ..., vocab[END]], ... ] Put vocabulary indexed captions into np.array of shape (len(batch_captions), columns), where "columns" is max(map(len, batch_captions)) when max_len is None and "columns" = min(max_len, max(map(len, batch_captions))) otherwise. Add padding with pad_idx where necessary. Input example: [[1, 2, 3], [4, 5]] Output example: np.array([[1, 2, 3], [4, 5, pad_idx]]) if max_len=None Output example: np.array([[1, 2], [4, 5]]) if max_len=2 Output example: np.array([[1, 2, 3], [4, 5, pad_idx]]) if max_len=100 Try to use numpy, we need this function to be fast! """ lens = np.array([min(len(a), max_len) if max_len is not None else len(a) for a in batch_captions]) columns = np.max(lens) batches = np.zeroes((len(batch_captions), columns)) + pad_idx for i, l in enumerate(lens): batches[i][:l] = batch_captions[i][:l] return batches.reshape(len(batch_captions), columns) ## GRADED PART, DO NOT CHANGE! # Vocabulary creation grader.set_answer("19Wpv", grading_utils.test_vocab(vocab, PAD, UNK, START, END)) # Captions indexing grader.set_answer("uJh73", grading_utils.test_captions_indexing(train_captions_indexed, vocab, UNK)) # Captions batching grader.set_answer("yiJkt", grading_utils.test_captions_batching(batch_captions_to_matrix)) # you can make submission with answers so far to check yourself at this stage grader.submit(COURSERA_EMAIL, COURSERA_TOKEN) # make sure you use correct argument in caption_tokens_to_indices assert len(caption_tokens_to_indices(train_captions[:10], vocab)) == 10 assert len(caption_tokens_to_indices(train_captions[:5], vocab)) == 5 ###Output _____no_output_____ ###Markdown Training Define architecture Since our problem is to generate image captions, RNN text generator should be conditioned on image. The idea is to use image features as an initial state for RNN instead of zeros. Remember that you should transform image feature vector to RNN hidden state size by fully-connected layer and then pass it to RNN.During training we will feed ground truth tokens into the lstm to get predictions of next tokens. 
Notice that we don't need to feed last token (END) as input (http://cs.stanford.edu/people/karpathy/): ###Code IMG_EMBED_SIZE = 2048 # train_img_embeds.shape[1] IMG_EMBED_BOTTLENECK = 120 WORD_EMBED_SIZE = 100 LSTM_UNITS = 300 LOGIT_BOTTLENECK = 120 pad_idx = vocab[PAD] # remember to reset your graph if you want to start building it from scratch! s = reset_tf_session() tf.set_random_seed(42) ###Output _____no_output_____ ###Markdown Here we define decoder graph.We use Keras layers where possible because we can use them in functional style with weights reuse like this:```pythondense_layer = L.Dense(42, input_shape=(None, 100) activation='relu')a = tf.placeholder('float32', [None, 100])b = tf.placeholder('float32', [None, 100])dense_layer(a) that's how we applied dense layer!dense_layer(b) and again``` Here's a figure to help you with flattening in decoder: ###Code class decoder: # [batch_size, IMG_EMBED_SIZE] of CNN image features img_embeds = tf.placeholder('float32', [None, IMG_EMBED_SIZE]) # [batch_size, time steps] of word ids sentences = tf.placeholder('int32', [None, None]) # we use bottleneck here to reduce the number of parameters # image embedding -> bottleneck img_embed_to_bottleneck = L.Dense(IMG_EMBED_BOTTLENECK, input_shape=(None, IMG_EMBED_SIZE), activation='elu') # image embedding bottleneck -> lstm initial state img_embed_bottleneck_to_h0 = L.Dense(LSTM_UNITS, input_shape=(None, IMG_EMBED_BOTTLENECK), activation='elu') # word -> embedding word_embed = L.Embedding(len(vocab), WORD_EMBED_SIZE) # lstm cell (from tensorflow) lstm = tf.nn.rnn_cell.LSTMCell(LSTM_UNITS) # we use bottleneck here to reduce model complexity # lstm output -> logits bottleneck token_logits_bottleneck = L.Dense(LOGIT_BOTTLENECK, input_shape=(None, LSTM_UNITS), activation="elu") # logits bottleneck -> logits for next token prediction token_logits = L.Dense(len(vocab), input_shape=(None, LOGIT_BOTTLENECK)) # initial lstm cell state of shape (None, LSTM_UNITS), # we need to condition it on `img_embeds` placeholder. c0 = h0 = img_embed_bottleneck_to_h0(img_embed_to_bottleneck(img_embeds)) # embed all tokens but the last for lstm input, # remember that L.Embedding is callable, # use `sentences` placeholder as input. word_embeds = word_embed(sentences[:,:-1]) # during training we use ground truth tokens `word_embeds` as context for next token prediction. # that means that we know all the inputs for our lstm and can get # all the hidden states with one tensorflow operation (tf.nn.dynamic_rnn). # `hidden_states` has a shape of [batch_size, time steps, LSTM_UNITS]. hidden_states, _ = tf.nn.dynamic_rnn(lstm, word_embeds, initial_state=tf.nn.rnn_cell.LSTMStateTuple(c0, h0)) # now we need to calculate token logits for all the hidden states # first, we reshape `hidden_states` to [-1, LSTM_UNITS] flat_hidden_states = tf.reshape(hidden_states,[-1, LSTM_UNITS]) # then, we calculate logits for next tokens using `token_logits_bottleneck` and `token_logits` layers flat_token_logits = token_logits(token_logits_bottleneck(flat_hidden_states)) # then, we flatten the ground truth token ids. # remember, that we predict next tokens for each time step, # use `sentences` placeholder. flat_ground_truth = tf.reshape(sentences[:,1:], [-1]) ### YOUR CODE HERE ### # we need to know where we have real tokens (not padding) in `flat_ground_truth`, # we don't want to propagate the loss for padded output tokens, # fill `flat_loss_mask` with 1.0 for real tokens (not pad_idx) and 0.0 otherwise. 
flat_loss_mask = tf.not_equal(flat_ground_truth, pad_idx)### YOUR CODE HERE ### # compute cross-entropy between `flat_ground_truth` and `flat_token_logits` predicted by lstm xent = tf.nn.sparse_softmax_cross_entropy_with_logits( labels=flat_ground_truth, logits=flat_token_logits ) # compute average `xent` over tokens with nonzero `flat_loss_mask`. # we don't want to account misclassification of PAD tokens, because that doesn't make sense, # we have PAD tokens for batching purposes only! loss = tf.reduce_mean(tf.boolean_mask(xent, flat_loss_mask)) # define optimizer operation to minimize the loss optimizer = tf.train.AdamOptimizer(learning_rate=0.001) train_step = optimizer.minimize(decoder.loss) # will be used to save/load network weights. # you need to reset your default graph and define it in the same way to be able to load the saved weights! saver = tf.train.Saver() # intialize all variables s.run(tf.global_variables_initializer()) ## GRADED PART, DO NOT CHANGE! # Decoder shapes test grader.set_answer("rbpnH", grading_utils.test_decoder_shapes(decoder, IMG_EMBED_SIZE, vocab, s)) # Decoder random loss test grader.set_answer("E2OIL", grading_utils.test_random_decoder_loss(decoder, IMG_EMBED_SIZE, vocab, s)) # you can make submission with answers so far to check yourself at this stage grader.submit(COURSERA_EMAIL, COURSERA_TOKEN) ###Output _____no_output_____ ###Markdown Training loopEvaluate train and validation metrics through training and log them. Ensure that loss decreases. ###Code train_captions_indexed = np.array(train_captions_indexed) val_captions_indexed = np.array(val_captions_indexed) # generate batch via random sampling of images and captions for them, # we use `max_len` parameter to control the length of the captions (truncating long captions) def generate_batch(images_embeddings, indexed_captions, batch_size, max_len=None): """ `images_embeddings` is a np.array of shape [number of images, IMG_EMBED_SIZE]. `indexed_captions` holds 5 vocabulary indexed captions for each image: [ [ [vocab[START], vocab["image1"], vocab["caption1"], vocab[END]], [vocab[START], vocab["image1"], vocab["caption2"], vocab[END]], ... ], ... ] Generate a random batch of size `batch_size`. Take random images and choose one random caption for each image. Remember to use `batch_captions_to_matrix` for padding and respect `max_len` parameter. Return feed dict {decoder.img_embeds: ..., decoder.sentences: ...}. """ sample_image_emb_idx = np.random.choice(np.arange(images_embeddings.shape[0]), size=batch_size, replace=False) batch_image_embeddings = images_embeddings[sample_image_emb_idx] def sampleCaption(image_idx: int, captions: np.ndarray) -> np.ndarray: sampled_caption_idx = np.random.choice(np.arange(len(captions)), replace=False) return captions[sampled_caption_idx] batch_captions_matrix = np.array([sampleCaption(i, indexed_captions[i]) for i in sample_image_emb_idx]) batch_captions_matrix = batch_captions_to_matrix(batch_captions_matrix, pad_idx, max_len) return {decoder.img_embeds: batch_image_embeddings, decoder.sentences: batch_captions_matrix} batch_size = 64 n_epochs = 12 n_batches_per_epoch = 1000 n_validation_batches = 100 # how many batches are used for validation after each epoch start_epoch = 17 # you can load trained weights here # you can load "weights_{epoch}" and continue training # uncomment the next line if you need to load weights saver.restore(s, os.path.abspath("weights")) ###Output _____no_output_____ ###Markdown Look at the training and validation loss, they should be decreasing! 
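Before launching the loop below, it can help to sanity-check the padding-mask idea from the decoder definition on a tiny, self-contained example (pure NumPy, values made up; `pad` stands in for `pad_idx`):

```python
import numpy as np

pad = 0
flat_truth = np.array([5, 7, pad, 3, pad])        # flattened target tokens, two of them are padding
flat_xent = np.array([0.2, 0.9, 4.0, 0.1, 4.0])   # made-up per-token cross-entropy values
mask = flat_truth != pad

masked_mean = flat_xent[mask].mean()  # 0.4  -- PAD positions ignored, as in decoder.loss
naive_mean = flat_xent.mean()         # 1.84 -- inflated by the PAD positions
print(masked_mean, naive_mean)
```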
###Code # actual training loop MAX_LEN = 20 # truncate long captions to speed up training # to make training reproducible np.random.seed(42) random.seed(42) for epoch in range(start_epoch, start_epoch + n_epochs): train_loss = 0 pbar = tqdm_utils.tqdm_notebook_failsafe(range(n_batches_per_epoch)) counter = 0 for _ in pbar: train_loss += s.run([decoder.loss, train_step], generate_batch(train_img_embeds, train_captions_indexed, batch_size, MAX_LEN))[0] counter += 1 pbar.set_description("Training loss: %f" % (train_loss / counter)) train_loss /= n_batches_per_epoch val_loss = 0 for _ in range(n_validation_batches): val_loss += s.run(decoder.loss, generate_batch(val_img_embeds, val_captions_indexed, batch_size, MAX_LEN)) val_loss /= n_validation_batches print('Epoch: {}, train loss: {}, val loss: {}'.format(epoch, train_loss, val_loss)) # save weights after finishing epoch saver.save(s, os.path.abspath("weights_{}".format(epoch))) print("Finished!") ## GRADED PART, DO NOT CHANGE! # Validation loss grader.set_answer("YJR7z", grading_utils.test_validation_loss( decoder, s, generate_batch, val_img_embeds, val_captions_indexed)) # you can make submission with answers so far to check yourself at this stage grader.submit(COURSERA_EMAIL, COURSERA_TOKEN) # check that it's learnt something, outputs accuracy of next word prediction (should be around 0.5) from sklearn.metrics import accuracy_score, log_loss def decode_sentence(sentence_indices): return " ".join(list(map(vocab_inverse.get, sentence_indices))) def check_after_training(n_examples): fd = generate_batch(train_img_embeds, train_captions_indexed, batch_size) logits = decoder.flat_token_logits.eval(fd) truth = decoder.flat_ground_truth.eval(fd) mask = decoder.flat_loss_mask.eval(fd).astype(bool) print("Loss:", decoder.loss.eval(fd)) print("Accuracy:", accuracy_score(logits.argmax(axis=1)[mask], truth[mask])) for example_idx in range(n_examples): print("Example", example_idx) print("Predicted:", decode_sentence(logits.argmax(axis=1).reshape((batch_size, -1))[example_idx])) print("Truth:", decode_sentence(truth.reshape((batch_size, -1))[example_idx])) print("") check_after_training(3) # save graph weights to file! 
saver.save(s, os.path.abspath("weights")) ###Output _____no_output_____ ###Markdown Applying modelHere we construct a graph for our final model.It will work as follows:- take an image as an input and embed it- condition lstm on that embedding- predict the next token given a START input token- use predicted token as an input at next time step- iterate until you predict an END token ###Code class final_model: # CNN encoder encoder, preprocess_for_model = get_cnn_encoder() saver.restore(s, os.path.abspath("weights")) # keras applications corrupt our graph, so we restore trained weights # containers for current lstm state lstm_c = tf.Variable(tf.zeros([1, LSTM_UNITS]), name="cell") lstm_h = tf.Variable(tf.zeros([1, LSTM_UNITS]), name="hidden") # input images input_images = tf.placeholder('float32', [1, IMG_SIZE, IMG_SIZE, 3], name='images') # get image embeddings img_embeds = encoder(input_images) # initialize lstm state conditioned on image init_c = init_h = decoder.img_embed_bottleneck_to_h0(decoder.img_embed_to_bottleneck(img_embeds)) init_lstm = tf.assign(lstm_c, init_c), tf.assign(lstm_h, init_h) # current word index current_word = tf.placeholder('int32', [1], name='current_input') # embedding for current word word_embed = decoder.word_embed(current_word) # apply lstm cell, get new lstm states new_c, new_h = decoder.lstm(word_embed, tf.nn.rnn_cell.LSTMStateTuple(lstm_c, lstm_h))[1] # compute logits for next token new_logits = decoder.token_logits(decoder.token_logits_bottleneck(new_h)) # compute probabilities for next token new_probs = tf.nn.softmax(new_logits) # `one_step` outputs probabilities of next token and updates lstm hidden state one_step = new_probs, tf.assign(lstm_c, new_c), tf.assign(lstm_h, new_h) # look at how temperature works for probability distributions # for high temperature we have more uniform distribution _ = np.array([0.5, 0.4, 0.1]) for t in [0.01, 0.1, 1, 10, 100]: print(" ".join(map(str, _**(1/t) / np.sum(_**(1/t)))), "with temperature", t) # this is an actual prediction loop def generate_caption(image, t=0.1, sample=False, max_len=20): """ Generate caption for given image. if `sample` is True, we will sample next token from predicted probability distribution. `t` is a temperature during that sampling, higher `t` causes more uniform-like distribution = more chaos. 
""" # condition lstm on the image s.run(final_model.init_lstm, {final_model.input_images: [image]}) # current caption # start with only START token caption = [vocab[START]] for _ in range(max_len): next_word_probs = s.run(final_model.one_step, {final_model.current_word: [caption[-1]]})[0] next_word_probs = next_word_probs.ravel() # apply temperature next_word_probs = next_word_probs**(1/t) / np.sum(next_word_probs**(1/t)) if sample: next_word = np.random.choice(range(len(vocab)), p=next_word_probs) else: next_word = np.argmax(next_word_probs) caption.append(next_word) if next_word == vocab[END]: break return list(map(vocab_inverse.get, caption)) # look at validation prediction example def apply_model_to_image_raw_bytes(raw): img = utils.decode_image_from_buf(raw) fig = plt.figure(figsize=(7, 7)) plt.grid('off') plt.axis('off') plt.imshow(img) img = utils.crop_and_preprocess(img, (IMG_SIZE, IMG_SIZE), final_model.preprocess_for_model) print(' '.join(generate_caption(img)[1:-1])) plt.show() # look at validation prediction example def apply_model_to_image_raw_bytes(raw): img = utils.decode_image_from_buf(raw) fig = plt.figure(figsize=(7, 7)) plt.grid('off') plt.axis('off') plt.imshow(img) img = utils.crop_and_preprocess(img, (IMG_SIZE, IMG_SIZE), final_model.preprocess_for_model) print(' '.join(generate_caption(img)[1:-1])) plt.show() def show_valid_example(val_img_fns, example_idx=0): zf = zipfile.ZipFile("val2014_sample.zip") all_files = set(val_img_fns) found_files = list(filter(lambda x: x.filename.rsplit("/")[-1] in all_files, zf.filelist)) example = found_files[example_idx] apply_model_to_image_raw_bytes(zf.read(example)) show_valid_example(val_img_fns, example_idx=100) # sample more images from validation for idx in np.random.choice(range(len(zipfile.ZipFile("val2014_sample.zip").filelist) - 1), 10): show_valid_example(val_img_fns, example_idx=idx) time.sleep(1) ###Output _____no_output_____ ###Markdown You can download any image from the Internet and appply your model to it! ###Code download_utils.download_file( "http://www.bijouxandbits.com/wp-content/uploads/2016/06/portal-cake-10.jpg", "portal-cake-10.jpg" ) !wget "http://www.bijouxandbits.com/wp-content/uploads/2016/06/portal-cake-10.jpg" apply_model_to_image_raw_bytes(open("portal-cake-10.jpg", "rb").read()) ###Output _____no_output_____ ###Markdown Now it's time to find 10 examples where your model works good and 10 examples where it fails! You can use images from validation set as follows:```pythonshow_valid_example(val_img_fns, example_idx=...)```You can use images from the Internet as follows:```python! wget ...apply_model_to_image_raw_bytes(open("...", "rb").read())```If you use these functions, the output will be embedded into your notebook and will be visible during peer review!When you're done, download your noteboook using "File" -> "Download as" -> "Notebook" and prepare that file for peer review! 
Good examples ###Code !wget "https://www.lepoint.fr/images/2020/05/21/20377715lpw-20377719-article-jpg_7124864_1250x625.jpg" -O "airbus-380.jpg" with open('airbus-380.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] Although it is not totally accurate the model was able to generate a good caption for this image.** ###Code !wget "https://www.sciencemag.org/sites/default/files/styles/inline__450w__no_aspect/public/dogs_1280p_0.jpg?itok=4t_1_fSJ" -O "dogs.jpg" with open('dogs.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] Thus we have the dogs and the fieds but unfortunately there is no bench.** ###Code !wget "https://cdn.britannica.com/25/93825-050-D1300547/collection-newspapers.jpg" -O "newspaper.jpg" with open('newspaper.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] Though the caption for this one is very vague I think it describes the image correctly** ###Code !wget "http://sbessavannah.weebly.com/uploads/2/6/1/3/26133807/9964196_orig.jpg" -O "zebra_savana.jpg" with open('zebra_savana.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] Again there is this information of fence that is added in the caption that must have been present in the embedding.** ###Code !wget "https://www.salomon.com/sites/default/files/styles/crop_link_ratio/public/httr/2019-10/Header-how-to-choose-on-piste-skis.jpeg?itok=Ht2z9PTd" -O "ski.jpg" with open('ski.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] This is perhaps the most accurate one** ###Code !wget "https://france3-regions.francetvinfo.fr/image/Ww6GysJSGgkqbZzJvNp98YpFO0g/600x400/regions/2020/06/09/5edf665a4a404_04d29887-3d4e-4de8-a5f5-c4031a9ee85b-3840156.jpeg" -O "train.jpg" with open('train.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] This one worked really well** ###Code !wget "https://blogs.letemps.ch/benoit-gaillard/wp-content/uploads/sites/36/2016/04/maxresdefault-750x410.jpg" -O "surfer.jpg" with open('surfer.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] Once again this is correct** ###Code !wget "https://www.letelegramme.fr/ar/imgproxy.php/images/2020/04/13/kylian-mbappe-et-les-parisiens-sont-largement-en-tete-de-la_5130847_676x443p.jpg?article=20200413-1012538756&aaaammjj=20200413" -O "football.jpg" with open('football.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] Very good** ###Code !wget "https://i.eurosport.com/2021/02/09/2989469-61345108-2560-1440.jpg" -O "tennis.jpg" with open('tennis.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[SUCCESS] This one is good also** ###Code !wget "https://m9r7v9m6.rocketcdn.me/wp-content/uploads/2019/07/musculation-Football-US-ball.jpg" -O "american_football.jpg" with open('american_football.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[HALF-SUCCESS] The model seems to favor european football to american football** ###Code !wget 
"https://images.sudouest.fr/2019/01/02/5c2d351166a4bd72756a4920/widescreen/1000x500/hockey-sur-glacenbsp.jpg?v1" -O "hockey.jpg" with open('hockey.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[HALF-SUCCESS] We have the group of peaple and the concept of snow/ice** ###Code !wget "https://cdn.radiofrance.fr/s3/cruiser-production/2016/10/4b181979-1380-4098-941d-113777c8340e/1136_libertemodif.jpg" -O "delacroix.jpg" with open('delacroix.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[HALF-SUCCESS] I guess the model is not a fan of French painter Eugene Delacroix. Though it is a painting and not a real picture the model was still able to identify a man and the woman (Liberty). The 'bed' part may come from the texture of the painting and/or the flag. Vive la France** ###Code !wget "https://img.webmd.com/dtmcms/live/webmd/consumer_assets/site_images/article_thumbnails/other/cat_relaxing_on_patio_other/1800x1200_cat_relaxing_on_patio_other.jpg?resize=750px:*" -O "cat_relax.jpg" with open('cat_relax.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[HALF-SUCCESS] The model manage to identify a cat indeed. However the caption says the cat is laying next to a cat. The model seems to have learned that a caption should be around 10 words when sometimes it is not necessary.** ###Code !wget "https://ak.picdn.net/shutterstock/videos/20455609/thumb/1.jpg" -O "bench_man.jpg" with open('bench_man.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[HALF-SUCCESS] It was starting great but went south quickly with the cat on the head.** Bad examples ###Code !wget "https://www.francetvinfo.fr/pictures/E6HVWhqBvpsAmkL3pc2UnKQ5JSg/750x750/2019/12/23/phpORxWfM.jpg" -O "cats_musical.jpg" with open('cats_musical.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILURE] Since this a picture from the upcomming movie musical of Cats I guess this is a rare sight for the model** ###Code !wget "https://i.pinimg.com/originals/c4/a9/75/c4a97517c9f67755eb29a8da5332bdd3.jpg" -O "cute_dog.jpg" with open('cute_dog.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] Other than the fact that it is not a cat, the model was able to identified a bed here. We can again see unecessary part of the caption here with the "next to a pile of clotes". From what I can see, most of the time when there is only on subject in the picture, the RNN will add positional information.** ###Code !wget "https://i.pinimg.com/originals/54/e0/bf/54e0bf396d1cc9dc73f387fdd9c3a9da.jpg" -O "garden.jpg" with open('garden.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] There are a lot of elements in this picture with no true subject. The captioning failed.** ###Code !wget "https://pianos-schaeffer.com/2092-large_default/piano-a-queue-feurich-218-noir.jpg" -O "piano.jpg" with open('piano.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] The training set may not contain music instrument pictures. Everything here is wrong. 
I am still looking for the cat.** ###Code !wget "https://pianosgaetanleroux.fr/wp-content/uploads/2019/07/piano-queue-erard01.jpg" -O "piano2.jpg" with open('piano2.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] I wanted to check that the previous failure was not due to the angle of the view** ###Code !wget "https://img.olympicchannel.com/images/image/private/t_16-9_3200/primary/piultz6nngltq541xmju" -O "swimmer.jpg" with open('swimmer.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] The model was able to identify the man and water.** ###Code !wget "https://media.bateaux.com/src/images/news/articles/ima-image-27298.jpg" -O "lighthouse.jpg" with open('lighthouse.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] It seems that sea and water is often associated with a surboard** ###Code !wget "https://cdn.paris.fr/paris/2020/05/12/huge-67a65318e89c13e2b63ddbe2bb89cc3c.jpg" -O "eiffel.jpg" with open('eiffel.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] I am afraid that this is not correct** ###Code !wget "https://img.lemde.fr/2020/11/22/214/0/1866/933/1440/720/60/0/2f45b75_none-rugby-union-autumncup-sco-fra-1122-1c.JPG" -O "rugby.jpg" with open('rugby.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____ ###Markdown **[FAILED] This is not tennis** ###Code !wget "https://skyandtelescope.org/wp-content/uploads/2015-04-15_552ec785e77b6_download.jpg" -O "astro.jpg" with open('astro.jpg', "rb") as file: apply_model_to_image_raw_bytes(file.read()) ###Output _____no_output_____
assignment4/Assignment4.ipynb
###Markdown **Submission deadline:*** **Regular problems: last lab session before or on Monday, 9.13.2020b*** **Bonus problems: Last lab during semester****Points: 5 + 9 bonus points**Please note: some of the assignments are tedious or boring if you are already a NumPy ninja. The bonus problems were designed to give you a more satisfying alternative. Heads Up!This assignment comes with starter code, but you are not forced to use it, as long as you execute all analysis demanded in the problems. A note about plots!Plots are a way of communication. Just lke text, they can be paraphrased. You do not have to exactly reproducy my plots, but you must try to make suer yourp plots tell a similar story:- label axis- add titles- choose plot type properly- choose a color scale, limits, ticksso that you can describe what is happening! Bugs?!Please submit Github PRs or email us about any problems with the notebook - we will try to correct them quickly. ###Code # Standard IPython notebook imports %matplotlib inline import os from io import StringIO import itertools import httpimport import matplotlib.pyplot as plt import numpy as np import pandas as pd from tqdm import tqdm_notebook import scipy.stats as sstats import scipy.optimize as sopt import seaborn as sns import sklearn.datasets import sklearn.ensemble import sklearn.svm import sklearn.tree import cvxopt # In this way we can import functions straight from github with httpimport.github_repo('janchorowski', 'nn_assignments', module='common', branch='nn18'): from common.plotting import plot_mat sns.set_style('whitegrid') ###Output _____no_output_____ ###Markdown SVM TheoryA linear SVM assigns points $x^{(i)}\in\mathbb{R}^n$ to one of twoclasses, $y^{(i)}\in\{-1,1\}$ using the decision rule:\begin{equation}y = \text{signum}(w^T x + b).\end{equation}SVM training consists of finding weights $w\in\mathbb{R}^n$and bias $b\in\mathbb{R}$ that maximize the separation margin. Thiscorresponds to solving the following quadratic optimization problem:\begin{equation}\begin{split} \min_{w,b,\xi} &\frac{1}{2}w^Tw + C\sum_{i=1}^m \xi_i \\ \text{s.t. } & y^{(i)}(w^T x^{(i)} + b) \geq 1- \xi_i\;\; \forall_i \\ & \xi_i \geq 0 \;\; \forall_i.\end{split}\end{equation} Problem 1 [2p]Load the iris dataset. 1. [1p] Using the `sklearn.svm.SVC` library train a linear SVM thatseparates the Virginica from the Versicolor class using thepetal length and petal width features. Plot the obtained decision boundary andthe support vectors (their locations and weights - coefficients $\alpha$).2. [.5p] Now train a nonlinear SVM using the Gaussian kernel. Tune the parameetrs `C` and `gamma` (for the kernel) to reach maximum training accurracy. Plot the decision boundary and supprt vectors.3. 
[.5p] Answer the following questions: - When the SVM is forced to maximally accurate on the train set, roughly how many support vectors do we get ?\ ans: 80% - what is the relationship between the regularization constant `C` and the support vector weights `alpha`?\ ans: The bigger C, the bigger difference between weights ###Code # load iris, extract petal_length and petal_width of versicolors and virginicas iris = sklearn.datasets.load_iris() print('Features: ', iris.feature_names) print('Targets: ', iris.target_names) petal_length = iris.data[:,iris.feature_names.index('petal length (cm)')] petal_width = iris.data[:, iris.feature_names.index('petal width (cm)')] IrisX = np.array(iris.data.T) IrisX = IrisX[:, iris.target!=0] IrisX2F = np.vstack([petal_length, petal_width]) IrisX2F = IrisX2F[:, iris.target!=0] # Set versicolor=0 and virginia=1 IrisY = (iris.target[iris.target!=0]-1).reshape(1,-1).astype(np.float64) plt.scatter(IrisX2F[0,:], IrisX2F[1,:], c=IrisY.ravel(), cmap='spring', edgecolors='k') plt.xlabel('petal_length') plt.ylabel('petal_width') # # Fit a linear SVM using libsvm # from sklearn.svm import SVC svm_model = SVC(kernel="linear") svm_model.fit(IrisX2F.T, IrisY.ravel()) print("libsvm error rate: %f" % ((svm_model.predict(IrisX2F.T)!=IrisY).mean(),)) # # Plot the decision boundary # petal_lengths, petal_widths = np.meshgrid(np.linspace(IrisX2F[0,:].min(), IrisX2F[0,:].max(), 100), np.linspace(IrisX2F[1,:].min(), IrisX2F[1,:].max(), 100)) IrisXGrid = np.vstack([petal_lengths.ravel(), petal_widths.ravel()]) predictions_Grid = svm_model.predict(IrisXGrid.T) plt.contourf(petal_lengths, petal_widths, predictions_Grid.reshape(petal_lengths.shape), cmap='spring') plt.scatter(IrisX2F[0,:], IrisX2F[1,:], c=IrisY.ravel(), cmap='spring', edgecolors='k') plt.xlabel('petal_length') plt.ylabel('petal_width') plt.title('Decision boundary found by libsvm') # # Plot the decision boundary and the support vectors. # # You can extract the indices of support vectors and their weights from fielfs of the # svm object. Display the loaction of support vectors and their weights (by changing the # size in the scatterplot) # # TODO # support_vector_indices = svm_model.support_ support_vector_coefficients = svm_model.dual_coef_ plt.contourf(petal_lengths, petal_widths, predictions_Grid.reshape(petal_lengths.shape), cmap='spring') plt.scatter( IrisX2F[0,support_vector_indices], IrisX2F[1,support_vector_indices], c=IrisY.ravel()[support_vector_indices], s=(np.abs(support_vector_coefficients)*10)**2, cmap='spring', edgecolors='k') plt.xlabel('petal_length') plt.ylabel('petal_width') plt.title('Decision boundary found by libsvm') # # Fit a nonlinear SVM with a Gaussian kernel using libsvm. 
# Optimize the SVM to make # svm_gauss_model = SVC(C=3, gamma=100) svm_gauss_model.fit(IrisX2F.T, IrisY.ravel()) print("libsvm error rate: %f" % ((svm_gauss_model.predict(IrisX2F.T)!=IrisY).mean(),)) petal_lengths, petal_widths = np.meshgrid(np.linspace(IrisX2F[0,:].min(), IrisX2F[0,:].max(), 100), np.linspace(IrisX2F[1,:].min(), IrisX2F[1,:].max(), 100)) IrisXGrid = np.vstack([petal_lengths.ravel(), petal_widths.ravel()]) predictions_Grid = svm_gauss_model.predict(IrisXGrid.T) plt.contourf(petal_lengths, petal_widths, predictions_Grid.reshape(petal_lengths.shape), cmap='spring') sizes = np.zeros(IrisY.shape[-1]) sizes[svm_gauss_model.support_] = np.abs(svm_gauss_model.dual_coef_) **2 *10 # sizes[svm_gauss_model.support_[np.abs(svm_gauss_model.dual_coef_).argsort()[:, -3:]]]*=10 plt.scatter(IrisX2F[0,:], IrisX2F[1,:], c=IrisY.ravel(), s=sizes, cmap='spring', edgecolors='k') print(IrisY.ravel().shape, svm_gauss_model.dual_coef_.shape) plt.xlabel('petal_length') plt.ylabel('petal_width') plt.title('Decision boundary found by libsvm') ###Output libsvm error rate: 0.010000 (100,) (1, 81) ###Markdown Problem 2 [1p]Reimplement the linear SVM using the use `cvxopt.solvers.qp`Quadratic Programming (QP) solver. You will need to define the matricesthat define the problem. Compare the obtained solutions. Extract thesupport vectors from the LIBSVM solution and plot the support vectors.The `cvxopt.solvers.qp` solves the following optimization problem: \begin{align}\text{minimize over } x \text{: }& \frac{1}{2} x^T P x + q^T x \\\text{subject to: } & Gx \leq h \\& Ax = b\end{align}\begin{equation}\begin{split} \min_{w,b,\xi} &\frac{1}{2}w^Tw + C\sum_{i=1}^m \xi_i \\ \text{s.t. } & - y^{(i)} * w^T x^{(i)} + -y^{(i)} * b - \xi_i \leq -1\;\; \forall_i \\ & \xi_i \geq 0 \;\; \forall_i.\end{split}\end{equation}To solve the SVM problem you need to encode the weights $W$, biases $b$, and slack variables $\xi$ as elements of the vector $x$, then properly fill the matrices and vectors $P$, $q$, $G$, $h$. We can ignore setting the $A$ and $b$ parametrs, since there are no linear constraints. 
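Concretely, with the variable vector ordered as $x = (w, b, \xi)$, one consistent layout of the QP data is the following sketch (here $X \in \mathbb{R}^{m \times n}$ is the data matrix, $y \in \{-1, 1\}^m$ the labels, $\mathbf{1}_m$ a vector of ones, and $\operatorname{diag}(y)$ the diagonal matrix built from $y$):

\begin{equation}
P = \begin{pmatrix} I_n & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
q = \begin{pmatrix} 0_n \\ 0 \\ C\,\mathbf{1}_m \end{pmatrix}, \quad
G = \begin{pmatrix} -\operatorname{diag}(y)\,X & -y & -I_m \\ 0 & 0 & -I_m \end{pmatrix}, \quad
h = \begin{pmatrix} -\mathbf{1}_m \\ 0_m \end{pmatrix}
\end{equation}

The first block row of $Gx \leq h$ reproduces $-y^{(i)}(w^T x^{(i)} + b) - \xi_i \leq -1$ and the second block row reproduces $-\xi_i \leq 0$.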
###Code IrisX2F.shape # # Now solve the SVM using the QP solver # n, m = IrisX2F.shape C=10.0 #x: w | b | ksi P = np.zeros((n+1+m, n+1+m)) #w, bias, xi q = np.zeros((n+1+m,1)) G = np.zeros((2*m, n+1+m)) # we have two constrains for each data point: # that the margin is equal to 1-xi # and that xi is nonnegative h = np.zeros((2*m,1)) # # TODO: fill in P, q, G, h # P[:n, :n] = np.eye(n) q[n+1:] = np.ones((m,1)) * C G[:m,:n+1] = -np.ones((m,n+1)) G[:m,:n+1] *= IrisY.T * 2 - 1 G[:m,:n] *= IrisX2F.T G[:m,n+1:] = -np.eye(m) G[m:,n+1:] = -np.eye(m) h[:m,:] = -np.ones((m, 1)) # # Now run the solver # ret = cvxopt.solvers.qp(cvxopt.matrix(P), cvxopt.matrix(q), cvxopt.matrix(G), cvxopt.matrix(h), ) ret = np.array(ret['x']) # # extract the weights and biases # W = ret[:n].reshape(-1,1) b = ret[n] # # Extract the weight and bias from libsvm for comparison # Wlibsvm = svm_model.coef_ blibsvm = svm_model.intercept_ print() print('W', W.T, 'Wlibsvm', Wlibsvm) print('b', b, 'blibsvm', blibsvm) petal_lengths, petal_widths = np.meshgrid(np.linspace(IrisX2F[0,:].min(), IrisX2F[0,:].max(), 100), np.linspace(IrisX2F[1,:].min(), IrisX2F[1,:].max(), 100)) IrisXGrid = np.vstack([petal_lengths.ravel(), petal_widths.ravel()]) # predictions_Grid = svm_model.predict(IrisXGrid.T) plt.contourf(petal_lengths, petal_widths, (W.T @ IrisXGrid + b >= 0).astype(int).reshape(petal_lengths.shape), cmap='spring') plt.scatter(IrisX2F[0,:], IrisX2F[1,:], c=IrisY.ravel(), cmap='spring', edgecolors='k') plt.xlabel('petal_length') plt.ylabel('petal_width') plt.title('Decision boundary found by QP solver') None ###Output pcost dcost gap pres dres 0: -8.2045e+03 4.6654e+03 2e+04 2e+01 3e+00 1: 5.3426e+01 -5.8951e+02 3e+03 2e+00 4e-01 2: 1.7058e+02 -1.9137e+01 4e+02 3e-01 5e-02 3: 1.4063e+02 7.5166e+01 1e+02 7e-02 1e-02 4: 1.3820e+02 9.8722e+01 6e+01 3e-02 5e-03 5: 1.4016e+02 1.1207e+02 3e+01 4e-03 8e-04 6: 1.3507e+02 1.1797e+02 2e+01 1e-03 2e-04 7: 1.2551e+02 1.2356e+02 2e+00 2e-04 3e-05 8: 1.2445e+02 1.2440e+02 5e-02 4e-06 6e-07 9: 1.2442e+02 1.2442e+02 1e-03 9e-08 2e-08 10: 1.2442e+02 1.2442e+02 2e-05 1e-09 2e-10 Optimal solution found. W [[2.75844069 4.827271 ]] Wlibsvm [[2.1829247 2.25365588]] b [-21.2054472] blibsvm [-14.41486828] ###Markdown Problem 3 [2p]Repeat 100 bootstrap experiments to establish the effect of constant $C$ on SVM.For each experiment do the following:1. Sample (with replacement) a bootstrap dataset equal in size to the training dataset. This will be this experiment's training dataset.2. Prepare the experiment's testing dataset by using samples not inluded in the bootstrap dataset.3. For all $C$ from the set $\{10^{-4}, 10^{-3.5}, 10^{-3.}, \ldots, 10^{6}\}$ fit a nonlinear SVM (Gaussian kernel, called \texttt{rbf} in LIBSVM using the default $\gamma$) and record the training and testing errors.Analyze a box plot of errors as a function of $C$. Can you see itsinfluence on the training and testing error, as well as on thetesting error variability? **Indicate regions of overfitting and underfitting.** ###Code res = [] for rep in range(100): bootstrap_sel = np.random.randint(0, IrisY.shape[1], IrisY.shape[1]) test_sel = np.setdiff1d(np.arange(IrisY.shape[1]), np.unique(bootstrap_sel)) bootstrap_IrisX = IrisX[:,bootstrap_sel] bootstrap_IrisY = IrisY[:,bootstrap_sel] test_IrisX = IrisX[:,test_sel] test_IrisY = IrisY[:,test_sel] # # TODO: Loop over a list of exponents. 
# for Cexponent in np.arange(-4, 6.5, 0.5): C = 10.0**Cexponent svm_model = SVC(C=C, gamma='auto') svm_model.fit(bootstrap_IrisX.T, bootstrap_IrisY.ravel()) train_acc = (svm_model.predict(bootstrap_IrisX.T)==bootstrap_IrisY).mean() test_acc = (svm_model.predict(test_IrisX.T)==test_IrisY).mean() res.append(dict(Cexponent=Cexponent, err=1-test_acc, subset='test')) res.append(dict(Cexponent=Cexponent, err=1-train_acc, subset='train')) res = pd.DataFrame(res) chart = sns.catplot(kind='box', x='Cexponent', y='err', col='subset', color='blue', data=res) chart.set_xticklabels(rotation=45) None ###Output _____no_output_____
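To read the under-/overfitting regions off more precisely than from the box plots alone, the bootstrap results can also be tabulated; a small follow-up sketch using the `res` DataFrame built above:

```python
# Mean train/test error for every value of C across the 100 bootstrap repetitions.
summary = (res.groupby(['Cexponent', 'subset'])['err']
              .mean()
              .unstack('subset'))
print(summary)
# Typical pattern: for very small C (strong regularization) both errors are high (underfitting),
# while for very large C the training error approaches zero but the test error and its spread
# grow again (overfitting).
```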
recommender/rest-api/Content Based PySpark.ipynb
###Markdown Load datasets----------------- ###Code from pyspark.sql import SparkSession from pyspark.sql.types import ArrayType, IntegerType from pyspark.sql.functions import col, udf spark = SparkSession.builder.appName("Recommendation ALS").getOrCreate() # do something to prove it works movies_df = spark.read.option("header", "true").csv("data/movies.csv", inferSchema=True) links_df = spark.read.option("header", "true").csv("data/links.csv", inferSchema=True).cache() movies_df = movies_df.join(links_df, on = ['movieId']).cache() ratings_df = spark.read.option("header", "true").csv("data/ratings.csv", inferSchema=True).cache() tags_df = spark.read.option("header", "true").csv("data/tags.csv", inferSchema=True).cache() genresList = ["Crime", "Romance", "Thriller", "Adventure", "Drama", "War", "Documentary", "Fantasy", "Mystery", \ "Musical", "Animation", "Film-Noir", "(no genres listed)", "IMAX", "Horror", "Western", \ "Comedy", "Children", "Action", "Sci-Fi"] udf_parse_genres = udf(lambda str: setGenresMatrix(str), ArrayType(IntegerType())) def setGenresMatrix(genres): movieGenresMatrix = [] movieGenresList = genres.split('|') for x in genresList: if (x in movieGenresList): movieGenresMatrix.append(1) else: movieGenresMatrix.append(0) return movieGenresMatrix movies_df = movies_df.withColumn("genresMatrix", udf_parse_genres(col("genres"))) movies_df.show() ###Output +-------+--------------------+--------------------+------+------+--------------------+ |movieId| title| genres|imdbId|tmdbId| genresMatrix| +-------+--------------------+--------------------+------+------+--------------------+ | 1| Toy Story (1995)|Adventure|Animati...|114709| 862|[0, 0, 0, 1, 0, 0...| | 2| Jumanji (1995)|Adventure|Childre...|113497| 8844|[0, 0, 0, 1, 0, 0...| | 3|Grumpier Old Men ...| Comedy|Romance|113228| 15602|[0, 1, 0, 0, 0, 0...| | 4|Waiting to Exhale...|Comedy|Drama|Romance|114885| 31357|[0, 1, 0, 0, 1, 0...| | 5|Father of the Bri...| Comedy|113041| 11862|[0, 0, 0, 0, 0, 0...| | 6| Heat (1995)|Action|Crime|Thri...|113277| 949|[1, 0, 1, 0, 0, 0...| | 7| Sabrina (1995)| Comedy|Romance|114319| 11860|[0, 1, 0, 0, 0, 0...| | 8| Tom and Huck (1995)| Adventure|Children|112302| 45325|[0, 0, 0, 1, 0, 0...| | 9| Sudden Death (1995)| Action|114576| 9091|[0, 0, 0, 0, 0, 0...| | 10| GoldenEye (1995)|Action|Adventure|...|113189| 710|[0, 0, 1, 1, 0, 0...| | 11|American Presiden...|Comedy|Drama|Romance|112346| 9087|[0, 1, 0, 0, 1, 0...| | 12|Dracula: Dead and...| Comedy|Horror|112896| 12110|[0, 0, 0, 0, 0, 0...| | 13| Balto (1995)|Adventure|Animati...|112453| 21032|[0, 0, 0, 1, 0, 0...| | 14| Nixon (1995)| Drama|113987| 10858|[0, 0, 0, 0, 1, 0...| | 15|Cutthroat Island ...|Action|Adventure|...|112760| 1408|[0, 1, 0, 1, 0, 0...| | 16| Casino (1995)| Crime|Drama|112641| 524|[1, 0, 0, 0, 1, 0...| | 17|Sense and Sensibi...| Drama|Romance|114388| 4584|[0, 1, 0, 0, 1, 0...| | 18| Four Rooms (1995)| Comedy|113101| 5|[0, 0, 0, 0, 0, 0...| | 19|Ace Ventura: When...| Comedy|112281| 9273|[0, 0, 0, 0, 0, 0...| | 20| Money Train (1995)|Action|Comedy|Cri...|113845| 11517|[1, 0, 1, 0, 1, 0...| +-------+--------------------+--------------------+------+------+--------------------+ only showing top 20 rows ###Markdown Compute the item feature vector------ ###Code from pyspark.sql.functions import log10 from pyspark.sql.functions import col import math tf = tags_df.groupBy(["movieId", "tag"]).count().selectExpr("movieId", "tag","count AS tag_count_tf") tags_distinct = tags_df.selectExpr("movieId", "tag").dropDuplicates() df = 
tags_distinct.groupBy("tag").count().selectExpr("tag", "count AS tag_count_df") idf = math.log10(tags_df.select("movieId").distinct().count()) df = df.withColumn("idf", idf - log10("tag_count_df")) tf = tf.join(df, on = "tag", how = "left") tf = tf.withColumn("tf-idf", col("tag_count_tf") * col("idf")) # show TF-IDF values for each movie # tf.select("movieId", "tag", "tf-idf").show() ###Output _____no_output_____ ###Markdown Calculate unit length vector of TF-IDF for normalization------ ###Code from pyspark.sql.functions import col from pyspark.sql.functions import sqrt vect_len = tf.select("movieId","tf-idf") vect_len = vect_len.withColumn("tf-idf-sq", col("tf-idf")**2) vect_len = vect_len.groupby("movieId").sum().withColumnRenamed("sum(tf-idf)", "tf-idf-sum")\ .withColumnRenamed("sum(tf-idf-sq)", "tf-idf-sq-sum") vect_len = vect_len.withColumn("vect_length", sqrt("tf-idf-sq-sum")) tf = tf.join(vect_len,on = "movieId", how = "left") tf = tf.withColumn("tag_vec", col("tf-idf")/col("vect_length")) # display the feature unit length vector of each movie: 'tag_vec' # tf.filter(tf["movieId"] == 60756).select("movieId","tag","tf-idf","vect_length", "tag_vec").show() ###Output _____no_output_____ ###Markdown Let’s implement the same and calculate user profile for each user. ###Code from pyspark.sql.functions import lit ratings_filter = ratings_df.filter(ratings_df["rating"] > 3) #enter user ID for analysis userId = 65 user_data= ratings_filter.filter(ratings_filter["userId"] == userId) user_data = tf.join(user_data, on = "movieId", how = "inner") user_tag_pref = user_data.groupby("tag").sum().withColumnRenamed("sum(tag_vec)", "tag_pref")\ .select("tag","tag_pref") user_tag_pref = user_tag_pref.withColumn("user", lit(userId)) user_tag_pref.filter(user_tag_pref["tag"] == "Boxing story").show() ###Output +------------+------------------+----+ | tag| tag_pref|user| +------------+------------------+----+ |Boxing story|0.5954367951274172| 65| +------------+------------------+----+ ###Markdown Step 4. 
Compute the cosine similarities and predict item ratings-------- ###Code from pyspark.sql.functions import col from pyspark.sql import functions as F import math movieId = 123 tf_movies = tf.filter(tf["movieId"] == movieId) print(tf.count()) tag_merge = tf_movies.join(user_tag_pref, on = "tag", how = "left") tag_merge.fillna({"tag_pref": 0}) tag_merge.withColumn("tag_value", col("tag_vec") * col("tag_pref")) tag_merge.show() tag_merge.agg(F.sum("tag_vec")).show() # tag_vec_val = math.sqrt(tag_merge.agg(F.sum("tag_vec"))) # print("Movie id {} tag_vec {}".format(movieId[0], tag_vec_val)) # tag_vec_val = np.sqrt(np.sum(np.square(tag_merge['tag_vec']), axis=0)) # tag_pref_val = np.sqrt(np.sum(np.square(user_tag_pref_all['tag_pref']), axis=0)) # tag_merge_final = tag_merge.groupby(['user','movieId'])[['tag_value']]\ # .sum()\ # .rename(columns = {'tag_value': 'rating'})\ # .reset_index() # tag_merge_final['rating']=tag_merge_final['rating']/(tag_vec_val*tag_pref_val) # tag_merge_all = tag_merge_all.append(tag_merge_final, ignore_index=True) ###Output 3579 +---+-------+------------+------------+---+------+------------+----------+-------------+-----------+-------+--------+----+ |tag|movieId|tag_count_tf|tag_count_df|idf|tf-idf|sum(movieId)|tf-idf-sum|tf-idf-sq-sum|vect_length|tag_vec|tag_pref|user| +---+-------+------------+------------+---+------+------------+----------+-------------+-----------+-------+--------+----+ +---+-------+------------+------------+---+------+------------+----------+-------------+-----------+-------+--------+----+ +------------+ |sum(tag_vec)| +------------+ | null| +------------+
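The cell above stops short of the final predicted rating; note also that `fillna` and `withColumn` return new DataFrames, so their results have to be re-assigned. Below is a sketch of one way to finish the cosine-similarity step for a single movie/user pair, reusing the `tag_merge` and `user_tag_pref` DataFrames built above (as the empty output suggests, movie 123 has no tags in this data, so a `movieId` that actually appears in `tags_df` should be used to get a non-null result):

```python
from pyspark.sql import functions as F

# Re-assign the transformed frames (fillna/withColumn are not in-place).
tag_merge_filled = tag_merge.fillna({"tag_pref": 0})
tag_merge_filled = tag_merge_filled.withColumn("tag_value", F.col("tag_vec") * F.col("tag_pref"))

# Cosine similarity = dot(movie_vec, user_profile) / (||movie_vec|| * ||user_profile||)
dot = tag_merge_filled.agg(F.sum("tag_value").alias("dot"))
movie_norm = tag_merge_filled.agg(F.sqrt(F.sum(F.col("tag_vec") ** 2)).alias("movie_norm"))
user_norm = user_tag_pref.agg(F.sqrt(F.sum(F.col("tag_pref") ** 2)).alias("user_norm"))

predicted = (dot.crossJoin(movie_norm)
                .crossJoin(user_norm)
                .withColumn("predicted_rating",
                            F.col("dot") / (F.col("movie_norm") * F.col("user_norm"))))
predicted.show()
```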
BBDC_M.ipynb
###Markdown cm.shape() ###Code cm list(predictions) #Bayes from sklearn.naive_bayes import GaussianNB gb = GaussianNB(priors=None, var_smoothing=1e-09) gb.fit(x_train, y_train) y_pred_g = rf.predict(x_test) accuracy_score(y_test, y_pred_g) #Neural network It didn't work np.random.seed(42) weights = np.random.rand(19,1) bias = np.random.rand(1) lr = 0.05 def sigmoid(a): return 1/(1+np.exp(-a)) #Derivative def sigmoid_der(a): return sigmoid(a)*(1-sigmoid(a)) for epoch in range(20000): inputs = x XW = np.dot(x, weights) + bias z = sigmoid(XW) err = z - y print(err.sum()) dcost_dpred = err dpred_dz = sigmoid_der(z) z_delta = dcost_dpred * dpred_dz inputs = x.T weights -= lr * np.dot(inputs, z_delta) for num in z_delta: bias -= lr * num #sklearn from sklearn.metrics import mean_squared_error,confusion_matrix, precision_score, recall_score, auc,roc_curve from sklearn import ensemble, linear_model, neighbors, svm, tree, neural_network from sklearn.pipeline import make_pipeline from sklearn.linear_model import Ridge from sklearn.preprocessing import PolynomialFeatures from sklearn import svm,model_selection, tree, linear_model, neighbors, naive_bayes, ensemble, discriminant_analysis, gaussian_process #Machine Learning Algorithms MLA = [ #Ensemble Methods ensemble.AdaBoostClassifier(), ensemble.BaggingClassifier(), ensemble.ExtraTreesClassifier(), ensemble.GradientBoostingClassifier(), ensemble.RandomForestClassifier(), ] MLA_columns = [] MLA_compare = pd.DataFrame(columns = MLA_columns) row_index = 0 for alg in MLA: predicted = alg.fit(x_train, y_train).predict(x_test) MLA_name = alg.__class__.__name__ MLA_compare.loc[row_index,'MLA Name'] = MLA_name MLA_compare.loc[row_index, 'MLA Train Accuracy'] = round(alg.score(x_train, y_train), 4) MLA_compare.loc[row_index, 'MLA Test Accuracy'] = round(alg.score(x_test, y_test), 4) row_index+=1 MLA_compare.sort_values(by = ['MLA Test Accuracy'], ascending = False, inplace = True) MLA_compare ###Output _____no_output_____
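A likely reason the from-scratch network above "didn't work" is shape broadcasting: if `y` is a flat array of shape `(m,)`, the expression `z - y` silently produces an `(m, m)` matrix instead of per-sample errors. (Note also that in the GaussianNB cell, `y_pred_g` is computed with `rf.predict` rather than `gb.predict`, so the reported accuracy is that of the random forest.) A minimal corrected sketch of the same single-neuron model, under the assumption that `x` has shape `(m, 19)` and labels are 0/1 as elsewhere in this notebook:

```python
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def train_single_neuron(x, y, lr=0.05, epochs=20000, seed=42):
    """Single sigmoid neuron trained with full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    m, n_features = x.shape
    y = np.asarray(y).reshape(m, 1)          # avoid (m, m) broadcasting of the error
    weights = rng.random((n_features, 1))
    bias = rng.random(1)
    for _ in range(epochs):
        z = sigmoid(x @ weights + bias)      # forward pass, shape (m, 1)
        delta = (z - y) * z * (1 - z)        # same error signal as the original loop
        weights -= lr * (x.T @ delta) / m    # averaged instead of summed for stability
        bias -= lr * delta.mean()
    return weights, bias

# w, b = train_single_neuron(np.asarray(x_train), np.asarray(y_train))
# y_pred_nn = (sigmoid(np.asarray(x_test) @ w + b) > 0.5).astype(int)
```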
tutorials/W3D2_BasicReinforcementLearning/student/W3D2_Tutorial1.ipynb
###Markdown Tutorial 1: Introduction to Reinforcement Learning**Week 3, Day 2: Basic Reinforcement Learning (RL)****By Neuromatch Academy**__Content creators:__ Matthew Sargent, Anoop Kulkarni, Sowmya Parthiban, Feryal Behbahani, Jane Wang__Content reviewers:__ Ezekiel Williams, Mehul Rastogi, Lily Cheng, Roberto Guidotti, Arush Tagade__Content editors:__ Spiros Chavlis __Production editors:__ Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesBy the end of the tutorial, you should be able to:1. Within the RL framework, be able to identify the different components: environment, agent, states, and actions. 2. Understand the Bellman equation and components involved. 3. Implement tabular value-based model-free learning (Q-learning and SARSA).4. Run a DQN agent and experiment with different hyperparameters.5. Have a high-level understanding of other (nonvalue-based) RL methods.6. Discuss real-world applications and ethical issues of RL.**Note:** There is an issue with some images not showing up if you're using a Safari browser. Please switch to Chrome if this is the case. ###Code # @title Tutorial slides # @markdown These are the slides for the videos in this tutorial from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/m3kqy/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown --- SetupRun the following 5 cells in order to set up needed functions. Don't worry about the code for now! ###Code # @title Install requirements from IPython.display import clear_output # @markdown we install the acme library, see [here](https://github.com/deepmind/acme) for more info # @markdown WARNING: There may be errors and warnings reported during the installation. # @markdown However, they should be ignored. 
!apt-get install -y xvfb ffmpeg --quiet !pip install --upgrade pip --quiet !pip install imageio --quiet !pip install imageio-ffmpeg !pip install gym --quiet !pip install enum34 --quiet !pip install dm-env --quiet !pip install pandas --quiet !pip install keras-nightly==2.5.0.dev2021020510 --quiet !pip install grpcio==1.34.0 --quiet !pip install tensorflow --quiet !pip install typing --quiet !pip install einops --quiet !pip install dm-acme --quiet !pip install dm-acme[reverb] --quiet !pip install dm-acme[tf] --quiet !pip install dm-acme[envs] --quiet !pip install dm-env --quiet clear_output() # Import modules import gym import enum import copy import time import acme import torch import base64 import dm_env import IPython import imageio import warnings import itertools import collections import numpy as np import pandas as pd import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import matplotlib as mpl import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf from acme import specs from acme import wrappers from acme.utils import tree_utils from acme.utils import loggers from torch.autograd import Variable from torch.distributions import Categorical from typing import Callable, Sequence tf.enable_v2_behavior() warnings.filterwarnings('ignore') np.set_printoptions(precision=3, suppress=1) # @title Figure settings import ipywidgets as widgets # interactive display %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") mpl.rc('image', cmap='Blues') # @title Helper Functions # @markdown Implement helpers for value visualisation map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a] map_from_action_to_name = lambda a: ("up", "right", "down", "left")[a] def plot_values(values, colormap='pink', vmin=-1, vmax=10): plt.imshow(values, interpolation="nearest", cmap=colormap, vmin=vmin, vmax=vmax) plt.yticks([]) plt.xticks([]) plt.colorbar(ticks=[vmin, vmax]) def plot_state_value(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(4, 4)) vmin = np.min(action_values) vmax = np.max(action_values) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_action_values(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(8, 8)) fig.subplots_adjust(wspace=0.3, hspace=0.3) vmin = np.min(action_values) vmax = np.max(action_values) dif = vmax - vmin for a in [0, 1, 2, 3]: plt.subplot(3, 3, map_from_action_to_subplot(a)) plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif) action_name = map_from_action_to_name(a) plt.title(r"$q(s, \mathrm{" + action_name + r"})$") plt.subplot(3, 3, 5) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_stats(stats, window=10): plt.figure(figsize=(16,4)) plt.subplot(121) xline = range(0, len(stats.episode_lengths), window) plt.plot(xline, smooth(stats.episode_lengths, window=window)) plt.ylabel('Episode Length') plt.xlabel('Episode Count') plt.subplot(122) plt.plot(xline, smooth(stats.episode_rewards, window=window)) plt.ylabel('Episode Return') plt.xlabel('Episode Count') # @title Helper functions def smooth(x, window=10): return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1) # @title Set random seed # @markdown Executing `set_seed(seed=seed)` you are 
setting the seed # for DL its critical to set the random seed so that students can have a # baseline to compare their results to expected results. # Read more here: https://pytorch.org/docs/stable/notes/randomness.html # Call `set_seed` function in the exercises to ensure reproducibility. import random import torch def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') # In case that `DataLoader` is used def seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 np.random.seed(worker_seed) random.seed(worker_seed) # @title Set device (GPU or CPU). Execute `set_device()` # especially if torch modules used. # inform the user if the notebook uses GPU or CPU. def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device SEED = 2021 set_seed(seed=SEED) DEVICE = set_device() ###Output _____no_output_____ ###Markdown --- Section 1: Introduction to Reinforcement Learning ###Code # @title Video 1: Introduction to RL from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV18V411p7iK", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"BWz3scQN50M", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Acme: a research framework for reinforcement learning**Acme** is a library of reinforcement learning (RL) agents and agent building blocks by Google DeepMind. Acme strives to expose simple, efficient, and readable agents, that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to do novel research. The design of Acme also attempts to provide multiple points of entry to the RL problem at differing levels of complexity.For more information see [github repository](https://github.com/deepmind/acme). 
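As a quick preview of the interface used by Acme and by the environments in this tutorial, a generic `dm_env` interaction loop looks roughly like this (a sketch; the random action simply stands in for an agent's policy, and the `GridWorld` environment it can run on is defined in the next section):

```python
def run_random_episode(environment):
    """Roll out one episode with uniformly random actions and return the episode return."""
    timestep = environment.reset()                    # first TimeStep: observation only
    episode_return = 0.
    while not timestep.last():
        action = np.random.randint(environment.action_spec().num_values)
        timestep = environment.step(action)           # apply the action, observe the reward
        episode_return += timestep.reward
    return episode_return
```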
--- Section 2: General Formulation of RL Problems and Gridworlds ###Code # @title Video 2: General Formulation and MDPs from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1k54y1E7Zn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"h6TxAALY5Fc", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown The agent interacts with the environment in a loop corresponding to the following diagram. The environment defines a set of **actions** that an agent can take. The agent takes an action informed by the **observations** it receives, and will get a **reward** from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulation of rewards obtained from the environment. Section 2.1: The Environment For this practical session we will focus on a **simple grid world** environment,which consists of a 9 x 10 grid of either wall or empty cells, depicted in black and white, respectively. The smiling agent starts from an initial location and needs to navigate to reach the goal square.Below you will find an implementation of this Gridworld as a ```dm_env.Environment```.There is no coding in this section, but if you want, you can look over the provided code so that you can familiarize yourself with an example of how to set up a **grid world** environment. ###Code # @title Implement GridWorld { form-width: "30%" } # @markdown *Double-click* to inspect the contents of this cell. class ObservationType(enum.IntEnum): STATE_INDEX = enum.auto() AGENT_ONEHOT = enum.auto() GRID = enum.auto() AGENT_GOAL_POS = enum.auto() class GridWorld(dm_env.Environment): def __init__(self, layout, start_state, goal_state=None, observation_type=ObservationType.STATE_INDEX, discount=0.9, penalty_for_walls=-5, reward_goal=10, max_episode_length=None, randomize_goals=False): """Build a grid environment. Simple gridworld defined by a map layout, a start and a goal state. Layout should be a NxN grid, containing: * 0: empty * -1: wall * Any other positive value: value indicates reward; episode will terminate Args: layout: NxN array of numbers, indicating the layout of the environment. start_state: Tuple (y, x) of starting location. goal_state: Optional tuple (y, x) of goal location. Will be randomly sampled once if None. observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x) discount: Discounting factor included in all Timesteps. 
penalty_for_walls: Reward added when hitting a wall (should be negative). reward_goal: Reward added when finding the goal (should be positive). max_episode_length: If set, will terminate an episode after this many steps. randomize_goals: If true, randomize goal at every episode. """ if observation_type not in ObservationType: raise ValueError('observation_type should be a ObservationType instace.') self._layout = np.array(layout) self._start_state = start_state self._state = self._start_state self._number_of_states = np.prod(np.shape(self._layout)) self._discount = discount self._penalty_for_walls = penalty_for_walls self._reward_goal = reward_goal self._observation_type = observation_type self._layout_dims = self._layout.shape self._max_episode_length = max_episode_length self._num_episode_steps = 0 self._randomize_goals = randomize_goals if goal_state is None: # Randomly sample goal_state if not provided goal_state = self._sample_goal() self.goal_state = goal_state def _sample_goal(self): """Randomly sample reachable non-starting state.""" # Sample a new goal n = 0 max_tries = 1e5 while n < max_tries: goal_state = tuple(np.random.randint(d) for d in self._layout_dims) if goal_state != self._state and self._layout[goal_state] == 0: # Reachable state found! return goal_state n += 1 raise ValueError('Failed to sample a goal state.') @property def layout(self): return self._layout @property def number_of_states(self): return self._number_of_states @property def goal_state(self): return self._goal_state @property def start_state(self): return self._start_state @property def state(self): return self._state def set_state(self, x, y): self._state = (y, x) @goal_state.setter def goal_state(self, new_goal): if new_goal == self._state or self._layout[new_goal] < 0: raise ValueError('This is not a valid goal!') # Zero out any other goal self._layout[self._layout > 0] = 0 # Setup new goal location self._layout[new_goal] = self._reward_goal self._goal_state = new_goal def observation_spec(self): if self._observation_type is ObservationType.AGENT_ONEHOT: return specs.Array( shape=self._layout_dims, dtype=np.float32, name='observation_agent_onehot') elif self._observation_type is ObservationType.GRID: return specs.Array( shape=self._layout_dims + (3,), dtype=np.float32, name='observation_grid') elif self._observation_type is ObservationType.AGENT_GOAL_POS: return specs.Array( shape=(4,), dtype=np.float32, name='observation_agent_goal_pos') elif self._observation_type is ObservationType.STATE_INDEX: return specs.DiscreteArray( self._number_of_states, dtype=int, name='observation_state_index') def action_spec(self): return specs.DiscreteArray(4, dtype=int, name='action') def get_obs(self): if self._observation_type is ObservationType.AGENT_ONEHOT: obs = np.zeros(self._layout.shape, dtype=np.float32) # Place agent obs[self._state] = 1 return obs elif self._observation_type is ObservationType.GRID: obs = np.zeros(self._layout.shape + (3,), dtype=np.float32) obs[..., 0] = self._layout < 0 obs[self._state[0], self._state[1], 1] = 1 obs[self._goal_state[0], self._goal_state[1], 2] = 1 return obs elif self._observation_type is ObservationType.AGENT_GOAL_POS: return np.array(self._state + self._goal_state, dtype=np.float32) elif self._observation_type is ObservationType.STATE_INDEX: y, x = self._state return y * self._layout.shape[1] + x def reset(self): self._state = self._start_state self._num_episode_steps = 0 if self._randomize_goals: self.goal_state = self._sample_goal() return dm_env.TimeStep( 
step_type=dm_env.StepType.FIRST, reward=None, discount=None, observation=self.get_obs()) def step(self, action): y, x = self._state if action == 0: # up new_state = (y - 1, x) elif action == 1: # right new_state = (y, x + 1) elif action == 2: # down new_state = (y + 1, x) elif action == 3: # left new_state = (y, x - 1) else: raise ValueError( 'Invalid action: {} is not 0, 1, 2, or 3.'.format(action)) new_y, new_x = new_state step_type = dm_env.StepType.MID if self._layout[new_y, new_x] == -1: # wall reward = self._penalty_for_walls discount = self._discount new_state = (y, x) elif self._layout[new_y, new_x] == 0: # empty cell reward = 0. discount = self._discount else: # a goal reward = self._layout[new_y, new_x] discount = 0. new_state = self._start_state step_type = dm_env.StepType.LAST self._state = new_state self._num_episode_steps += 1 if (self._max_episode_length is not None and self._num_episode_steps >= self._max_episode_length): step_type = dm_env.StepType.LAST return dm_env.TimeStep( step_type=step_type, reward=np.float32(reward), discount=discount, observation=self.get_obs()) def plot_grid(self, add_start=True): plt.figure(figsize=(4, 4)) plt.imshow(self._layout <= -1, interpolation='nearest') ax = plt.gca() ax.grid(0) plt.xticks([]) plt.yticks([]) # Add start/goal if add_start: plt.text( self._start_state[1], self._start_state[0], r'$\mathbf{S}$', fontsize=16, ha='center', va='center') plt.text( self._goal_state[1], self._goal_state[0], r'$\mathbf{G}$', fontsize=16, ha='center', va='center') h, w = self._layout.shape for y in range(h - 1): plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-w', lw=2) for x in range(w - 1): plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-w', lw=2) def plot_state(self, return_rgb=False): self.plot_grid(add_start=False) # Add the agent location plt.text( self._state[1], self._state[0], u'😃', # fontname='symbola', fontsize=18, ha='center', va='center', ) if return_rgb: fig = plt.gcf() plt.axis('tight') plt.subplots_adjust(0, 0, 1, 1, 0, 0) fig.canvas.draw() data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') w, h = fig.canvas.get_width_height() data = data.reshape((h, w, 3)) plt.close(fig) return data def plot_policy(self, policy): action_names = [ r'$\uparrow$', r'$\rightarrow$', r'$\downarrow$', r'$\leftarrow$' ] self.plot_grid() plt.title('Policy Visualization') h, w = self._layout.shape for y in range(h): for x in range(w): # if ((y, x) != self._start_state) and ((y, x) != self._goal_state): if (y, x) != self._goal_state: action_name = action_names[policy[y, x]] plt.text(x, y, action_name, ha='center', va='center') def plot_greedy_policy(self, q): greedy_actions = np.argmax(q, axis=2) self.plot_policy(greedy_actions) def build_gridworld_task(task, discount=0.9, penalty_for_walls=-5, observation_type=ObservationType.STATE_INDEX, max_episode_length=200): """Construct a particular Gridworld layout with start/goal states. Args: task: string name of the task to use. One of {'simple', 'obstacle', 'random_goal'}. discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. 
First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x). max_episode_length: If set, will terminate an episode after this many steps. """ tasks_specifications = { 'simple': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (7, 2) }, 'obstacle': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1], [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (2, 8) }, 'random_goal': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), # 'randomize_goals': True }, } return GridWorld( discount=discount, penalty_for_walls=penalty_for_walls, observation_type=observation_type, max_episode_length=max_episode_length, **tasks_specifications[task]) def setup_environment(environment): """Returns the environment and its spec.""" # Make sure the environment outputs single-precision floats. environment = wrappers.SinglePrecisionWrapper(environment) # Grab the spec of the environment. environment_spec = specs.make_environment_spec(environment) return environment, environment_spec ###Output _____no_output_____ ###Markdown We will use two distinct tabular GridWorlds:* `simple` where the goal is at the bottom left of the grid, little navigation required.* `obstacle` where the goal is behind an obstacle the agent must avoid.You can visualize the grid worlds by running the cell below. Note that **S** indicates the start state and **G** indicates the goal. ###Code # Visualise GridWorlds # Instantiate two tabular environments, a simple task, and one that involves # the avoidance of an obstacle. simple_grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID) obstacle_grid = build_gridworld_task( task='obstacle', observation_type=ObservationType.GRID) # Plot them. simple_grid.plot_grid() plt.title('Simple') obstacle_grid.plot_grid() plt.title('Obstacle') ###Output _____no_output_____ ###Markdown In this environment, the agent has four possible **actions**: `up`, `right`, `down`, and `left`. The **reward** is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal, and otherwise continues. The **discount** on continuing steps, is $\gamma = 0.9$. Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g., **observations**) or consumes (e.g., **actions**). 
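(To make the reward and discount numbers above concrete, consider a purely illustrative trajectory: the agent bumps into a wall on its first step, crosses an empty cell on its second, and reaches the goal on its third. Its discounted return from the start is

\begin{equation}
G = -5 + \gamma \cdot 0 + \gamma^{2} \cdot 10 = -5 + 0.81 \cdot 10 = 3.1 ,
\end{equation}

so even with an early wall bump the episode still ends with a positive return.)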
The `environment_spec` will show you the form of the **observations**, **rewards** and **discounts** that the environment exposes and the form of the **actions** that can be taken. ###Code # @title Look at environment_spec { form-width: "30%" } # Note: setup_environment is implemented in the same cell as GridWorld. environment, environment_spec = setup_environment(simple_grid) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ###Output _____no_output_____ ###Markdown We first set the environment to its initial state by calling the `reset()` method which returns the first observation and resets the agent to the starting location. ###Code environment.reset() environment.plot_state() ###Output _____no_output_____ ###Markdown Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.Let's take an action and visualise the resulting state of the grid-world. (You'll need to rerun the cell if you pick a new action.) **Note for kaggle users:** As kaggle does not render the forms automatically the students should be careful to notice the various instructions and manually play around with the values for the variables ###Code # @title Pick an action and see the state changing action = "left" #@param ["up", "right", "down", "left"] {type:"string"} action_int = {'up': 0, 'right': 1, 'down': 2, 'left':3 } action = int(action_int[action]) timestep = environment.step(action) # pytype: dm_env.TimeStep environment.plot_state() # @title Run loop { form-width: "30%" } # @markdown This function runs an agent in the environment for a number of # @markdown episodes, allowing it to learn. # @markdown *Double-click* to inspect the `run_loop` function. def run_loop(environment, agent, num_episodes=None, num_steps=None, logger_time_delta=1., label='training_loop', log_loss=False, ): """Perform the run loop. We are following the Acme run loop. Run the environment loop for `num_episodes` episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely. Args: environment: dm_env used to generate trajectories. agent: acme.Actor for selecting actions in the run loop. num_steps: number of steps to run the loop for. If `None` (default), runs without limit. num_episodes: number of episodes to run the loop for. If `None` (default), runs without limit. logger_time_delta: time interval (in seconds) between consecutive logging steps. label: optional label used at logging steps. """ logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta) iterator = range(num_episodes) if num_episodes else itertools.count() all_returns = [] num_total_steps = 0 for episode in iterator: # Reset any counts and start the environment. start_time = time.time() episode_steps = 0 episode_return = 0 episode_loss = 0 timestep = environment.reset() # Make the first observation. agent.observe_first(timestep) # Run an episode. 
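    # Each pass through the loop below: the agent picks an action from its
    # current policy, the environment advances one step and returns the next
    # TimeStep, and the agent observes that transition and updates itself
    # before choosing the next action.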
while not timestep.last(): # Generate an action from the agent's policy and step the environment. action = agent.select_action(timestep.observation) timestep = environment.step(action) # Have the agent observe the timestep and let the agent update itself. agent.observe(action, next_timestep=timestep) agent.update() # Book-keeping. episode_steps += 1 num_total_steps += 1 episode_return += timestep.reward if log_loss: episode_loss += agent.last_loss if num_steps is not None and num_total_steps >= num_steps: break # Collect the results and combine with counts. steps_per_second = episode_steps / (time.time() - start_time) result = { 'episode': episode, 'episode_length': episode_steps, 'episode_return': episode_return, } if log_loss: result['loss_avg'] = episode_loss/episode_steps all_returns.append(episode_return) # Log the given results. logger.write(result) if num_steps is not None and num_total_steps >= num_steps: break return all_returns # @title Implement the evaluation loop { form-width: "30%" } # @markdown This function runs the agent in the environment for a number of # @markdown episodes, without allowing it to learn, in order to evaluate it. # @markdown *Double-click* to inspect the `evaluate` function. def evaluate(environment: dm_env.Environment, agent: acme.Actor, evaluation_episodes: int): frames = [] for episode in range(evaluation_episodes): timestep = environment.reset() episode_return = 0 steps = 0 while not timestep.last(): frames.append(environment.plot_state(return_rgb=True)) action = agent.select_action(timestep.observation) timestep = environment.step(action) steps += 1 episode_return += timestep.reward print( f'Episode {episode} ended with reward {episode_return} in {steps} steps' ) return frames def display_video(frames: Sequence[np.ndarray], filename: str = 'temp.mp4', frame_rate: int = 12): """Save and display video.""" # Write the frames to a video. with imageio.get_writer(filename, fps=frame_rate) as video: for frame in frames: video.append_data(frame) # Read video and display the video. video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) ###Output _____no_output_____ ###Markdown Section 2.2: The AgentWe will be implementing Tabular & Function Approximation agents. Tabular agents are purely in Python.All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs: Agent interfaceEach agent implements the following functions:```pythonclass Agent(acme.Actor): def __init__(self, number_of_actions, number_of_states, ...): """Provides the agent the number of actions and number of states.""" def select_action(self, observation): """Generates actions from observations.""" def observe_first(self, timestep): """Records the initial timestep in a trajectory.""" def observe(self, action, next_timestep): """Records the transition which occurred from taking an action.""" def update(self): """Updates the agent's internals to potentially change its behavior."""```Remarks on the `observe()` function:1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.2. The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.3. 
The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`. Coding Exercise 2.1: Random AgentBelow is a partially complete implemention of an agent that follows a random (non-learning) policy. Fill in the ```select_action``` method.The ```select_action``` method should return a random **integer** between 0 and ```self._num_actions``` (not a tensor or an array!) ###Code class RandomAgent(acme.Actor): def __init__(self, environment_spec): """Gets the number of available actions from the environment spec.""" self._num_actions = environment_spec.actions.num_values def select_action(self, observation): """Selects an action uniformly at random.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the select action method") ################################################# # TODO return a random integer beween 0 and self._num_actions. # HINT: see the reference for how to sample a random integer in numpy: # https://numpy.org/doc/1.16/reference/routines.random.html action = ... return action def observe_first(self, timestep): """Does not record as the RandomAgent has no use for data.""" pass def observe(self, action, next_timestep): """Does not record as the RandomAgent has no use for data.""" pass def update(self): """Does not update as the RandomAgent does not learn from data.""" pass ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7eaa84d6.py) ###Code # @title Visualisation of a random agent in GridWorld { form-width: "30%" } # Create the agent by giving it the action space specification. agent = RandomAgent(environment_spec) # Run the agent in the evaluation loop, which returns the frames. frames = evaluate(environment, agent, evaluation_episodes=1) # Visualize the random agent's episode. display_video(frames) ###Output _____no_output_____ ###Markdown --- Section 3: The Bellman Equation ###Code # @title Video 3: The Bellman Equation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Lv411E7CB", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"cLCoNBmYUns", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown In this tutorial we focus mainly on **value based methods**, where agents maintain a value for all state-action pairs and use those estimates to choose actions that maximize that **value** (instead of maintaining a policy directly, like in **policy gradient methods**). 
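Concretely, for the tabular agents implemented below, "a value for all state-action pairs" is nothing more than an array with one row per state and one column per action, and acting greedily means taking an argmax over a row. A minimal sketch (the shape and names are illustrative; the agents below store this table as `self._q`):

```python
import numpy as np

num_states, num_actions = 90, 4          # e.g. one row per cell of a 9 x 10 grid, one column per move
q = np.zeros((num_states, num_actions))  # Q(s, a) estimates, initialized to zero

def greedy_action(state):
  # Choose the action with the highest estimated value in this state.
  return int(np.argmax(q[state]))
```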
We represent the **action-value function** (otherwise known as $\color{green}Q$-function associated with following/employing a policy $\pi$ in a given MDP as:\begin{equation}\color{green}Q^{\color{blue}{\pi}}(\color{red}{s},\color{blue}{a}) = \mathbb{E}_{\tau \sim P^{\color{blue}{\pi}}} \left[ \sum_t \gamma^t \color{green}{r_t}| s_0=\color{red}s,a_0=\color{blue}{a} \right]\end{equation}where $\tau = \{\color{red}{s_0}, \color{blue}{a_0}, \color{green}{r_0}, \color{red}{s_1}, \color{blue}{a_1}, \color{green}{r_1}, \cdots \}$Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:\begin{equation}\color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a}) = \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\left( \color{green}{R}(\color{red}{s},\color{blue}{a}, \color{red}{s'}) + \gamma \color{green}V^\color{blue}{\pi}(\color{red}{s'}) \right)\end{equation}where $\color{green}V^\color{blue}{\pi}$ is the expected $\color{green}Q^\color{blue}{\pi}$ value for a particular state, i.e. $\color{green}V^\color{blue}{\pi}(\color{red}{s}) = \sum_{\color{blue}{a} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi}(\color{blue}{a} |\color{red}{s}) \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a})$. --- Section 4: Policy Evaluation ###Code # @title Video 4: Policy Evaluation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV15f4y157zA", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HAxR4SuaZs4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **Episodic vs non-episodic environments:** Up until now, we've mainly been talking about episodic environments, or environments that terminate and reset (resampled) after a finite number of steps. However, there are also *non-episodic* environments, in which an agent cannot count on the environment resetting. 
Thus, they are forced to learn in a *continual* fashion.**Policy iteration vs value iteration:** Compare the two equations below, noting that the only difference is that in value iteration, the second sum is replaced by a max.*Policy iteration (using Bellman expectation equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\sum_{\color{blue}{a'} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi_{k-1}}(\color{blue}{a'} |\color{red}{s'}) \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation}*Value iteration (using Bellman optimality equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\max_{\color{blue}{a'}} \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation} Coding Exercise 4.1 Policy Evaluation Agent Tabular agents implement a function `q_values()` returning a matrix of Q valuesof shape: (`number_of_states`, `number_of_actions`)In this section, we will implement a `PolicyEvalAgent` as an ACME actor: given an `evaluation_policy` $\pi_e$ and a `behaviour_policy` $\pi_b$, it will use the `behaviour_policy` to choose actions, and it will use the corresponding trajectory data to evaluate the `evaluation_policy` (i.e. compute the Q-values as if you were following the `evaluation_policy`). Algorithm:**Initialize** $Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}(\color{red}s)$**Loop forever**:1. $\color{red}{s} \gets{}$current (nonterminal) state 2. $\color{blue}{a} \gets{} \text{behaviour_policy }\pi_b(\color{red}s)$ 3. Take action $\color{blue}{a}$; observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$4. Compute TD-error: $\delta = \color{green}R + \gamma Q(\color{red}{s'}, \underbrace{\pi_e(\color{red}{s'}}_{\color{blue}{a'}})) − Q(\color{red}s, \color{blue}a)$4. Update Q-value with a small $\alpha$ step: $Q(\color{red}s, \color{blue}a) \gets Q(\color{red}s, \color{blue}a) + \alpha \delta$We will use a uniform `random policy` as our `evaluation policy` here, but you could replace this with any policy you want, such as a greedy one. ###Code # Uniform random policy def random_policy(q): return np.random.randint(4) class PolicyEvalAgent(acme.Actor): def __init__(self, environment_spec, evaluated_policy, behaviour_policy=random_policy, step_size=0.1): self._state = None # Get number of states and actions from the environment spec. self._number_of_states = environment_spec.observations.num_values self._number_of_actions = environment_spec.actions.num_values self._step_size = step_size self._behaviour_policy = behaviour_policy self._evaluated_policy = evaluated_policy ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Initialize your Q-values!") ################################################# # TODO Initialize the Q-values to be all zeros. 
# (Note: can also be random, but we use zeros here for reproducibility) # HINT: This is a table of state and action pairs, so needs to be a 2-D # array. See the reference for how to create this in numpy: # https://numpy.org/doc/stable/reference/generated/numpy.zeros.html self._q = ... self._action = None self._next_state = None @property def q_values(self): # return the Q values return self._q def select_action(self, observation): # Select an action return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): self._state = timestep.observation def observe(self, action, next_timestep): s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute TD-Error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Need to select the next action") ################################################# # TODO Select the next action from the evaluation policy # HINT: Refer to step 4 of the algorithm above. next_a = ... self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a] def update(self): # Updates s = self._state a = self._action # Q-value table update. self._q[s, a] += self._step_size * self._td_error # Update the state self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7b3f830c.py) ###Code # @title Perform policy evaluation { form-width: "30%" } # @markdown Here you can visualize the state value and action-value functions for the "simple" task. num_steps = 1e3 # Create the environment grid = build_gridworld_task(task='simple') environment, environment_spec = setup_environment(grid) # Create the policy evaluation agent to evaluate a random policy. agent = PolicyEvalAgent(environment_spec, evaluated_policy=random_policy) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=int(num_steps)) # get the q-values q = agent.q_values.reshape(grid._layout.shape + (4, )) # visualize value functions print('AFTER {} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=1.) 
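# Note (an assumption about the plotting helper defined earlier): with
# epsilon=1. the state values v(s) shown by plot_action_values should be the
# plain average of the action values, which is what matches the uniform random
# policy we just evaluated.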
###Output _____no_output_____ ###Markdown --- Section 5: Tabular Value-Based Model-Free Learning ###Code # @title Video 5: Model-Free Learning from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1iU4y1n7M6", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"Y4TweUYnexU", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **On-policy (SARSA) vs off-policy (Q-learning) TD control:** Compare the two equations below and see that the only difference is that for Q-learning, the update is performed assuming that a greedy policy is followed, which is not the one used to collect the data, hence the name *off-policy*. *SARSA*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation}*Q-learning*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\max_{\color{blue}{a'}} \color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation} Section 5.1: On-policy control: SARSA AgentIn this section, we are focusing on control RL algorithms, which perform the **evaluation** and **improvement** of the policy synchronously. That is, the policy that is being evaluated improves as the agent is using it to interact with the environent.The first algorithm we are going to be looking at is SARSA. This is an **on-policy algorithm** -- i.e: the data collection is done by leveraging the policy we're trying to optimize. As discussed during lectures, a greedy policy with respect to a given $\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\epsilon$-greedy policy with respect to $\color{Green}Q$. SARSA Algorithm**Input:**- $\epsilon \in (0, 1)$ the probability of taking a random action , and- $\alpha > 0$ the step size, also known as learning rate.**Initialize:** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}$**Loop forever:**1. Get $\color{red}s \gets{}$current (non-terminal) state 2. Select $\color{blue}a \gets{} \text{epsilon_greedy}(\color{green}Q(\color{red}s, \cdot))$ 3. Step in the environment by passing the selected action $\color{blue}a$4. Observe resulting reward $\color{green}r$, discount $\gamma$, and state $\color{red}{s'}$5. Compute TD error: $\Delta \color{green}Q \gets \color{green}r + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}s, \color{blue}a)$, where $\color{blue}{a'} \gets \text{epsilon_greedy}(\color{green}Q(\color{red}{s'}, \cdot))$5. 
Update $\color{green}Q(\color{red}s, \color{blue}a) \gets \color{green}Q(\color{red}s, \color{blue}a) + \alpha \Delta \color{green}Q$ Coding Exercise 5.1: Implement $\epsilon$-greedyBelow you will find incomplete code for sampling from an $\epsilon$-greedy policy, to be used later when we implement an agent that learns values according to the SARSA algorithm. ###Code def epsilon_greedy( q_values_at_s: np.ndarray, # Q-values in state s: Q(s, a). epsilon: float = 0.1 # Probability of taking a random action. ): """Return an epsilon-greedy action sample.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete epsilon greedy policy function") ################################################# # TODO generate a uniform random number and compare it to epsilon to decide if # the action should be greedy or not # HINT: Use np.random.random() to generate a random float from 0 to 1. if ...: #TODO Greedy: Pick action with the largest Q-value. action = ... else: # Get the number of actions from the size of the given vector of Q-values. num_actions = np.array(q_values_at_s).shape[-1] # TODO else return a random action # HINT: Use np.random.randint() to generate a random integer. action = ... return action ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_524ce08f.py) ###Code # @title Sample action from $\epsilon$-greedy { form-width: "30%" } # @markdown With $\epsilon=0.5$, you should see that about half the time, you will get back the optimal # @markdown action 3, but half the time, it will be random. # Create fake q-values q_values = np.array([0, 0, 0, 1]) # Set epsilon = 0.5 epsilon = 0.5 action = epsilon_greedy(q_values, epsilon=epsilon) print(action) ###Output _____no_output_____ ###Markdown Coding Exercise 5.2: Run your SARSA agent on the `obstacle` environmentThis environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods. Try varying the number of steps. ###Code class SarsaAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, epsilon: float, step_size: float = 0.1 ): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size self._epsilon = epsilon # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return epsilon_greedy(self._q[observation], self._epsilon) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the action that would be taken from the next state. next_a = self.select_action(next_s) # Compute the on-policy Q-value update. 
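    # Note: `next_a` above was sampled from the same epsilon-greedy behaviour
    # policy that acts in the environment; bootstrapping on Q(s', a') for that
    # sampled action (rather than on a max over actions) is what makes SARSA
    # on-policy.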
self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the on-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: see step 5 in the pseudocode above. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[s, a] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_4f341a18.py) ###Code # @title Run SARSA agent and visualize value function num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # Create the environment. grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # Create the agent. agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1) # Run the experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) print('AFTER {0:,} STEPS ...'.format(num_steps)) # Get the Q-values and reshape them to recover grid-like structure of states. q_values = agent.q_values grid_shape = grid.layout.shape q_values = q_values.reshape([*grid_shape, -1]) # Visualize the value and Q-value tables. plot_action_values(q_values, epsilon=1.) # Visualize the greedy policy. environment.plot_greedy_policy(q_values) ###Output _____no_output_____ ###Markdown Section 5.2 Off-policy control: Q-learning AgentReminder: $\color{green}Q$-learning is a very powerful and general algorithm, that enables control (figuring out the optimal policy/value function) both on and off-policy.**Initialize** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s} \in \color{red}{\mathcal{S}}$ and $\color{blue}{a} \in \color{blue}{\mathcal{A}}$**Loop forever**:1. Get $\color{red}{s} \gets{}$current (non-terminal) state 2. Select $\color{blue}{a} \gets{} \text{behaviour_policy}(\color{red}{s})$ 3. Step in the environment by passing the selected action $\color{blue}{a}$4. Observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$5. Compute the TD error: $\Delta \color{green}Q \gets \color{green}{r} + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}{s}, \color{blue}{a})$, where $\color{blue}{a'} \gets \arg\max_{\color{blue}{\mathcal A}} \color{green}Q(\color{red}{s'}, \cdot)$6. Update $\color{green}Q(\color{red}{s}, \color{blue}{a}) \gets \color{green}Q(\color{red}{s}, \color{blue}{a}) + \alpha \Delta \color{green}Q$Notice that the actions $\color{blue}{a}$ and $\color{blue}{a'}$ are not selected using the same policy, hence this algorithm being **off-policy**. 
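To see the on-policy/off-policy difference on concrete numbers, here is a single update on a toy Q-table. All values below are made up for illustration; in the exercise the table lives in `self._q` and the step size $\alpha$ is `step_size`.

```python
import numpy as np

# Toy Q-table: 2 states x 4 actions, with made-up values (illustration only).
q = np.array([[1.0, 0.5, 0.0, 2.0],
              [0.0, 3.0, 1.0, 0.5]])
s, a = 0, 3                    # we took action 3 in state 0 ...
r, g, next_s = -5.0, 0.9, 1    # ... hit a wall (reward -5) and ended up in state 1
alpha = 0.1

# SARSA (on-policy): the target uses the next action a' actually sampled from
# the epsilon-greedy behaviour policy -- suppose it happened to pick a' = 2.
next_a = 2
td_error_sarsa = r + g * q[next_s, next_a] - q[s, a]      # -5 + 0.9 * 1.0 - 2.0 = -6.1

# Q-learning (off-policy): the target maximizes over next actions instead.
td_error_qlearning = r + g * np.max(q[next_s]) - q[s, a]  # -5 + 0.9 * 3.0 - 2.0 = -4.3

# Either agent then nudges only the visited entry Q(s, a) by a small step:
q[s, a] += alpha * td_error_qlearning                     # 2.0 + 0.1 * (-4.3) = 1.57
print(td_error_sarsa, td_error_qlearning, q[s, a])
```

Note how the Q-learning target looks at the best next action even though the behaviour policy might never have taken it, which is exactly why the learned greedy policy can differ from the epsilon-greedy policy that collected the data.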
Coding Exercise 5.3: Implement Q-Learning ###Code QValues = np.ndarray Action = int # A value-based policy takes the Q-values at a state and returns an action. ValueBasedPolicy = Callable[[QValues], Action] class QLearningAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, behaviour_policy: ValueBasedPolicy, step_size: float = 0.1): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size # Store behavior policy. self._behaviour_policy = behaviour_policy # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the TD error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the off-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: This is very similar to what we did for SARSA, except keep in mind # that we're now taking a max over the q-values (see lecture footnotes above). # You will find the function np.max() useful. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[...] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_0f0ff9d8.py) Run your Q-learning agent on the `obstacle` environment ###Code # @title Run your Q-learning epsilon = 1. 
# @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=0) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown Experiment with different levels of greediness* The default was $\epsilon=1.$, what does this correspond to?* Try also $\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way? ###Code # @title Run the cell epsilon = 0.1 # @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=epsilon) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown --- Section 6: Function Approximation ###Code # @title Video 6: Function approximation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1sg411M7cn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"7_MYePyYhrM", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown So far we only considered look-up tables for value-functions. In all previous cases every state and action pair $(\color{red}{s}, \color{blue}{a})$, had an entry in our $\color{green}Q$-table. Again, this is possible in this environment as the number of states is equal to the number of cells in the grid. 
But this is not scalable to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table could be in this situation?).An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.But what we **really** want is just to be able to *compute* the Q-value, when fed with a particular $(\color{red}{s}, \color{blue}{a})$ pair. So if we had a way to get a function to do this work instead of keeping a big table, we'd get around this problem.To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** them to output the values they should. In this section, we will explore $\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf). Section 6.1 Replay BuffersAn important property of off-policy methods like $\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.In order to optimize the $\color{green}Q$-function we can then sample data from the replay **dataset** and use that data to perform an update. An illustration of this learning loop is shown below. In the next cell we will show how to implement a simple replay buffer. This can be as simple as a python list containing transition data. In more complicated scenarios we might want to have a more performance-tuned variant, we might have to be more concerned about how large replay is and what to do when its full, and we might want to sample from replay in different ways. But a simple python list can go a surprisingly long way. ###Code # Simple replay buffer # Create a convenient container for the SARS tuples required by deep RL agents. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class ReplayBuffer(object): """A simple Python replay buffer.""" def __init__(self, capacity: int = None): self.buffer = collections.deque(maxlen=capacity) self._prev_state = None def add_first(self, initial_timestep: dm_env.TimeStep): self._prev_state = initial_timestep.observation def add(self, action: int, timestep: dm_env.TimeStep): transition = Transitions( state=self._prev_state, action=action, reward=timestep.reward, discount=timestep.discount, next_state=timestep.observation, ) self.buffer.append(transition) self._prev_state = timestep.observation def sample(self, batch_size: int) -> Transitions: # Sample a random batch of Transitions as a list. batch_as_list = random.sample(self.buffer, batch_size) # Convert the list of `batch_size` Transitions into a single Transitions # object where each field has `batch_size` stacked fields. 
return tree_utils.stack_sequence_fields(batch_as_list) def flush(self) -> Transitions: entire_buffer = tree_utils.stack_sequence_fields(self.buffer) self.buffer.clear() return entire_buffer def is_ready(self, batch_size: int) -> bool: return batch_size <= len(self.buffer) ###Output _____no_output_____ ###Markdown Section 6.2: NFQ Agent[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$In other words, the value $\color{green}Q(\color{red}{s}, \color{blue}{a})$ are approximated by the output of a neural network $\color{green}{Q_w}(\color{red}{s}, \color{blue}{a})$ for each possible action $\color{blue}{a} \in \color{blue}{\mathcal{A}}$.$^2$When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.By training our neural network to output values such that the *TD error is minimized*, we will also satisfy the Bellman Optimality Equation, which is a good sufficient condition to enforce, to obtain an optimal policy.Thanks to automatic differentiation, we can just write the TD error as a loss, e.g., with an $\ell^2$ loss, but others would work too:\begin{equation}L(\color{green}w) = \mathbb{E}\left[ \left( \color{green}{r} + \gamma \max_\color{blue}{a'} \color{green}{Q_w}(\color{red}{s'}, \color{blue}{a'}) − \color{green}{Q_w}(\color{red}{s}, \color{blue}{a}) \right)^2\right].\end{equation}Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.NFQ builds on $\color{green}Q$-learning, but if one were to update the Q-values online directly, the training can be unstable and very slow.Instead, NFQ uses a replay buffer, similar to what we see implemented above (Section 6.1), to update the Q-value in a batched setting.When it was introduced, it also was entirely off-policy using a uniformly random policy to collect data, which was prone to instability when applied to more complex environments (e.g. when the input are pixels or the tasks are longer and more complicated).But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.Below you will find an incomplete NFQ agent that takes in observations from a gridworld. Instead of receiving a tabular state, it receives an observation in the form of its (x,y) coordinates in the gridworld, and the (x,y) coordinates of the goal.The goal of this coding exercise is to complete this agent by implementing the loss, using mean squared error.---$^1$ If you read the NFQ paper, they use a "control" notation, where there is a "cost to minimize", instead of "rewards to maximize", so don't be surprised if signs/max/min do not correspond.$^2$ We could feed it $\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $argmax$ over them, it's easiest to just output them all in one pass. Coding Exercise 6.1: Implement NFQ ###Code # Create a convenient container for the SARS tuples required by NFQ. 
Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class NeuralFittedQAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, q_network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 3e-4): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): q_next_s = self._q_network(next_s) # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the NFQ Agent") ################################################# # TODO Average the squared TD errors over the entire batch using # self._loss_fn, which is defined above as nn.MSELoss() # HINT: Take a look at the reference for nn.MSELoss here: # https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html # What should you put for the input and the target? loss = ... # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() # Store the loss for logging purposes (see run_loop implementation above). 
self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_f42d1415.py) Train and Evaluate the NFQ Agent ###Code # @title Training the NFQ Agent epsilon = 0.4 # @param {type:"number"} max_episode_length = 200 # Create the environment. grid = build_gridworld_task( task='simple', observation_type=ObservationType.AGENT_GOAL_POS, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Define the neural function approximator (aka Q network). q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values)) # Build the trainable Q-learning agent agent = NeuralFittedQAgent( environment_spec, q_network, epsilon=epsilon, replay_capacity=100_000, batch_size=10, learning_rate=1e-3) returns = run_loop( environment=environment, agent=agent, num_episodes=500, logger_time_delta=1., log_loss=True) # @title Evaluating the agent (set $\epsilon=0$) # Temporarily change epsilon to be more greedy; remember to change it back. agent._epsilon = 0.0 # Record a few episodes. frames = evaluate(environment, agent, evaluation_episodes=5) # Change epsilon back. agent._epsilon = epsilon # Display the video of the episodes. display_video(frames, frame_rate=6) # @title Visualise the learned $Q$ values # Evaluate the policy for every state, similar to tabular agents above. environment.reset() pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4, )) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) ###Output _____no_output_____ ###Markdown Compare the Q-values approximated with the neural network with the tabular case in **Section 5.3**. Notice how the neural network is generalizing from the visited states to the unvisited similar states, while in the tabular case we updated the value of each state only when we visited that state. Compare the greedy and behaviour ($\epsilon$-greedy) policies ###Code # @title Compare the greedy policy with the agent's policy # @markdown Notice that the agent's behavior policy has a lot more randomness, # @markdown due to the high $\epsilon$. However, the greedy policy that's learned # @markdown is optimal. 
environment.plot_greedy_policy(q) plt.figtext(-.08, .95, 'Greedy policy using the learnt Q-values') plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's behavior policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown --- Section 7: DQN ###Code #@title Video 7: Deep Q-Networks (DQN) from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Mo4y1Q7yD", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HEDoNtV1y-w", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown --> In this section, we will look at an advanced deep RL Agent based on the following publication, [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.Here the agent will act directly on a pixel representation of the gridworld. You can find an incomplete implementation below. Coding Exercise 7.1: Run a DQN Agent ###Code class DQN(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 5e-4, target_update_frequency: int = 10): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # create a second q net with the same structure and initial values, which # we'll be updating separately from the learned q-network. self._target_network = copy.deepcopy(self._q_network) # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Keep an internal tracker of steps self._current_step = 0 # How often to update the target network self._target_update_frequency = target_update_frequency # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(), lr=learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. # Sonnet requires a batch dimension, which we squeeze out right after. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. 
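    # torch.rand(1) draws a uniform sample from [0, 1): with probability
    # (1 - epsilon) it exceeds epsilon and we act greedily (argmax over the
    # Q-values); otherwise we fall back to a uniformly random action index.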
if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): self._current_step += 1 if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Optionally unpack the transitions to lighten notation. # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the DQN Agent") ################################################# #TODO get the value of the next states evaluated by the target network #HINT: use self._target_network, defined above. q_next_s = ... # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) # Average the squared TD errors over the entire batch loss = self._loss_fn(target_q_value, q_s_a) # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() if self._current_step % self._target_update_frequency == 0: self._target_network.load_state_dict(self._q_network.state_dict()) # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_d6d1b1d0.py) ###Code # @title Train and evaluate the DQN agent epsilon = 0.25 # @param {type: "number"} num_episodes = 500 # @param {type: "integer"} max_episode_length = 50 # @param {type: "integer"} grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Build the agent's network. 
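# With ObservationType.GRID the observation is a (height, width, 3) array, so a
# batch has shape [B, H, W, 3]. nn.Conv2d expects channels-first [B, C, H, W]
# input, so the small Permute module below reorders the dimensions first.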
class Permute(nn.Module): def __init__(self, order: list): super(Permute,self).__init__() self.order = order def forward(self, x): return x.permute(self.order) q_network = nn.Sequential(Permute([0, 3, 1, 2]), nn.Conv2d(3, 32, kernel_size=4, stride=2,padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(3, 1), nn.Flatten(), nn.Linear(384, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values) ) agent = DQN( environment_spec=environment_spec, network=q_network, batch_size=10, epsilon=epsilon, target_update_frequency=25) returns = run_loop( environment=environment, agent=agent, num_episodes=num_episodes, num_steps=100000) # @title Visualise the learned $Q$ values # Evaluate the policy for every state, similar to tabular agents above. pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) # @title Compare the greedy policy with the agent's policy environment.plot_greedy_policy(q) plt.figtext(-.08, .95, "Greedy policy using the learnt Q-values") plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's epsilon-greedy policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown **Note:** You’ll get a better estimate of the value functions if you increase `num_episodes` and `max_episode_length`, but this will take longer to train. Feel free to play around after the day! --- Section 8: Beyond Value Based Model-Free Methods ###Code # @title Video 8: Other RL Methods from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV14w411977Y", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"1N4Jm9loJx4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Cartpole taskHere we switch to training on a different kind of task, which has a continuous action space: Cartpole in [Gym](https://gym.openai.com/). As you recall from the video, policy-based methods are particularly well-suited for these kinds of tasks. We will be exploring two of those methods below. ###Code # @title Make a CartPole environment, `gym.make('CartPole-v1')` env = gym.make('CartPole-v1') # Set seeds env.seed(SEED) set_seed(SEED) ###Output _____no_output_____ ###Markdown Section 8.1: Policy gradientNow we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. 
$\color{blue}\pi(\color{red}s) = \arg\max_{\color{blue}a}\color{green}Q(\color{red}s, \color{blue}a)$, we will directly parameterize the policy and write it as the distribution\begin{equation}\color{blue}a_t \sim \color{blue}\pi_{\theta}(\color{blue}a_t|\color{red}s_t).\end{equation}Here $\theta$ represent the parameters of the policy. We will update the policy parameters using gradient ascent to **maximize** expected future reward.One convenient way to represent the conditional distribution above is as a function that takes a state $\color{red}s$ and returns a distribution over actions $\color{blue}a$.Defined below is an agent which implements the REINFORCE algorithm. REINFORCE (Williams 1992) is the simplest model-free general reinforcement learning technique.The **basic idea** is to use probabilistic action choice. If the reward at the end turns out to be high, we make **all** actions in this sequence **more likely** (otherwise, we do the opposite).This strategy could reinforce "bad" actions as well, however they will turn out to be part of trajectories with low reward and will likely not get accentuated.From the lectures, we know that we need to compute\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}where $\color{green} G_t$ is the sum over future rewards from time $t$, defined as\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. It will then update its policy given the above gradient (and the Adam optimizer).A policy gradient trains an agent without explicitly mapping the value for every state-action pair in an environment by taking small steps and updating the policy based on the reward associated with that step. In this section, we will build a small network that trains using policy gradient using PyTorch.The agent can receive a reward immediately for an action or it can receive the award at a later time such as the end of the episode. The policy function our agent will try to learn is $\pi_\theta(a,s)$, where $\theta$ is the parameter vector, $s$ is a particular state, and $a$ is an action.Monte-Carlo Policy Gradient approach will be used, which means the agent will run through an entire episode and then update policy based on the rewards obtained. ###Code # @title Set the hyperparameters for Policy Gradient num_steps = 300 learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # @param {type:"number"} # @markdown Only used in Policy Gradient Method: hidden_neurons = 128 # @param {type:"integer"} ###Output _____no_output_____ ###Markdown Coding Exercise 8.1: Creating a simple neural networkBelow you will find some incomplete code. Fill in the missing code to construct the specified neural network.Let us define a simple feed forward neural network with one hidden layer of 128 neurons and a dropout of 0.6. Let's use Adam as our optimizer and a learning rate of 0.01. Use the hyperparameters already defined rather than using explicit values.Using dropout will significantly improve the performance of the policy. Do compare your results with and without dropout and experiment with other hyper-parameter values as well. 
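One detail worth keeping in mind when you compare runs with and without dropout (unrelated to the exercise solution itself): `nn.Dropout` is only stochastic in training mode and becomes a no-op in evaluation mode. A minimal standalone sketch, using hypothetical tensors rather than the agent's own data:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.6)
x = torch.ones(8)

drop.train()   # training mode: ~60% of entries are zeroed, survivors scaled by 1/(1-p) = 2.5
print(drop(x))

drop.eval()    # evaluation mode: dropout passes the input through unchanged
print(drop(x))
```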
###Code class PolicyGradientNet(nn.Module): def __init__(self): super(PolicyGradientNet, self).__init__() self.state_space = env.observation_space.shape[0] self.action_space = env.action_space.n ################################################# ## TODO for students: Define two linear layers ## from the first expression raise NotImplementedError("Student exercise: Create FF neural network.") ################################################# # HINT: you can construct linear layers using nn.Linear(); what are the # sizes of the inputs and outputs of each of the layers? Also remember # that you need to use hidden_neurons (see hyperparameters section above). # https://pytorch.org/docs/stable/generated/torch.nn.Linear.html self.l1 = ... self.l2 = ... self.gamma = ... # Episode policy and past rewards self.past_policy = Variable(torch.Tensor()) self.reward_episode = [] # Overall reward and past loss self.past_reward = [] self.past_loss = [] def forward(self, x): model = torch.nn.Sequential( self.l1, nn.Dropout(p=dropout), nn.ReLU(), self.l2, nn.Softmax(dim=-1) ) return model(x) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_9aaf4a83.py) Now let's create an instance of the network we have defined and use Adam as the optimizer using the learning_rate as hyperparameter already defined above. ###Code policy = PolicyGradientNet() pg_optimizer = optim.Adam(policy.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Select ActionThe `select_action()` function chooses an action based on our policy probability distribution using the PyTorch distributions package. Our policy returns a probability for each possible action in our action space (move left or move right) as an array of length two such as [0.7, 0.3]. We then choose an action based on these probabilities, record our history, and return our action. ###Code def select_action(state): #Select an action (0 or 1) by running policy model and choosing based on the probabilities in state state = torch.from_numpy(state).type(torch.FloatTensor) state = policy(Variable(state)) c = Categorical(state) action = c.sample() # Add log probability of chosen action if policy.past_policy.dim() != 0: policy.past_policy = torch.cat([policy.past_policy, c.log_prob(action).reshape(1)]) else: policy.past_policy = (c.log_prob(action).reshape(1)) return action ###Output _____no_output_____ ###Markdown Update policyThis function updates the policy. Reward $G_t$We update our policy by taking a sample of the action value function $Q^{\pi_\theta} (s_t,a_t)$ by playing through episodes of the game. $Q^{\pi_\theta} (s_t,a_t)$ is defined as the expected return by taking action $a$ in state $s$ following policy $\pi$.We know that for every step the simulation continues we receive a reward of 1. We can use this to calculate the policy gradient at each time step, where $r$ is the reward for a particular state-action pair. Rather than using the instantaneous reward, $r$, we instead use a long term reward $ v_{t} $ where $v_t$ is the discounted sum of all future rewards for the length of the episode. $v_{t}$ is then,\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}where $\gamma$ is the discount factor (0.99). 
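These returns can be computed with a single backwards pass over the episode's rewards, which is exactly what `update_policy` below does. A minimal standalone sketch in plain Python, assuming unit rewards (CartPole pays $+1$ for every surviving step) and $\gamma = 0.99$:

```python
gamma = 0.99
rewards = [1.0] * 5          # a hypothetical 5-step episode with +1 reward per step
returns, G = [], 0.0
for r in reversed(rewards):  # accumulate from the last step back to the first
    G = r + gamma * G
    returns.insert(0, G)
print([round(g, 2) for g in returns])  # [4.9, 3.94, 2.97, 1.99, 1.0]
```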
For example, if an episode lasts 5 steps, the reward for each step will be [4.90, 3.94, 2.97, 1.99, 1].Next we scale our reward vector by substracting the mean from each element and scaling to unit variance by dividing by the standard deviation. This practice is common for machine learning applications and the same operation as Scikit Learn's __[StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)__. It also has the effect of compensating for future uncertainty. Update Policy: equationAfter each episode we apply Monte-Carlo Policy Gradient to improve our policy according to the equation:\begin{equation}\Delta\theta_t = \alpha\nabla_\theta \, \log \pi_\theta (s_t,a_t)G_t\end{equation}We will then feed our policy history multiplied by our rewards to our optimizer and update the weights of our neural network using stochastic gradient **ascent**. This should increase the likelihood of actions that got our agent a larger reward. The following function ```update_policy``` updates the network weights and therefore the policy. ###Code def update_policy(): R = 0 rewards = [] # Discount future rewards back to the present using gamma for r in policy.reward_episode[::-1]: R = r + policy.gamma * R rewards.insert(0, R) # Scale rewards rewards = torch.FloatTensor(rewards) rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps) # Calculate loss pg_loss = (torch.sum(torch.mul(policy.past_policy, Variable(rewards)).mul(-1), -1)) # Update network weights # Use zero_grad(), backward() and step() methods of the optimizer instance. pg_optimizer.zero_grad() pg_loss.backward() # Update the weights for param in policy.parameters(): param.grad.data.clamp_(-1, 1) pg_optimizer.step() # Save and intialize episode past counters policy.past_loss.append(pg_loss.item()) policy.past_reward.append(np.sum(policy.reward_episode)) policy.past_policy = Variable(torch.Tensor()) policy.reward_episode= [] ###Output _____no_output_____ ###Markdown TrainingThis is our main policy training loop. For each step in a training episode, we choose an action, take a step through the environment, and record the resulting new state and reward. We call update_policy() at the end of each episode to feed the episode history to our neural network and improve our policy. ###Code def policy_gradient_train(episodes): running_reward = 10 for episode in range(episodes): state = env.reset() done = False for time in range(1000): action = select_action(state) # Step through environment using chosen action state, reward, done, _ = env.step(action.item()) # Save reward policy.reward_episode.append(reward) if done: break # Used to determine when the environment is solved. running_reward = (running_reward * gamma) + (time * (1 - gamma)) update_policy() if episode % 50 == 0: print(f"Episode {episode}\tLast length: {time:5.0f}" f"\tAverage length: {running_reward:.2f}") if running_reward > env.spec.reward_threshold: print(f"Solved! 
Running reward is now {running_reward} " f"and the last episode runs to {time} time steps!") break ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 #@param {type:"integer"} policy_gradient_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for policy gradient def plot_policy_gradient_training(): window = int(episodes / 20) fig, ((ax1), (ax2)) = plt.subplots(1, 2, sharey=True, figsize=[15, 4]); rolling_mean = pd.Series(policy.past_reward).rolling(window).mean() std = pd.Series(policy.past_reward).rolling(window).std() ax1.plot(rolling_mean) ax1.fill_between(range(len(policy.past_reward)), rolling_mean-std, rolling_mean+std, color='orange', alpha=0.2) ax1.set_title(f"Episode Length Moving Average ({window}-episode window)") ax1.set_xlabel('Episode'); ax1.set_ylabel('Episode Length') ax2.plot(policy.past_reward) ax2.set_title('Episode Length') ax2.set_xlabel('Episode') ax2.set_ylabel('Episode Length') fig.tight_layout(pad=2) plt.show() plot_policy_gradient_training() ###Output _____no_output_____ ###Markdown Exercise 8.1: Explore different hyperparameters.Try running the model again, by modifying the hyperparameters and observe the outputs. Be sure to rerun the function definition cells in order to pick up on the updated values.What do you see when you 1. increase learning rate2. decrease learning rate3. decrease gamma ($\gamma$)4. increase number of hidden neurons in the network Section 8.2: Actor-criticRecall the policy gradient\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}The policy parameters are updated using Monte Carlo technique and uses random samples. This introduces high variability in log probabilities and cumulative reward values. This leads to noisy gradients and can cause unstable learning.One way to reduce variance and increase stability is subtracting the cumulative reward by a baseline:\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} (G_t - b) \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}Intuitively, reducing cumulative reward will make smaller gradients and thus smaller and more stable (hopefully) updates.From the lecture slides, we know that in Actor Critic Method:1. The “Critic” estimates the value function. This could be the action-value (the Q value) or state-value (the V value).2. The “Actor” updates the policy distribution in the direction suggested by the Critic (such as with policy gradients).Both the Critic and Actor functions are parameterized with neural networks. The "Critic" network parameterizes the Q-value. 
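In the implementation that follows, the critic's value estimate $V(s_t)$ plays the role of this baseline. Concretely, the training loop further down builds a bootstrapped discounted return $Q_t$ for every visited state, forms the advantage $A_t = Q_t - V(s_t)$, and assembles the combined loss

\begin{equation}
\mathcal{L} = \underbrace{-\frac{1}{T}\sum_{t=1}^{T} \log \pi_\theta(a_t|s_t)\, A_t}_{\text{actor loss}} + \underbrace{\frac{1}{2T}\sum_{t=1}^{T} A_t^{2}}_{\text{critic loss}} + 0.001 \cdot (\text{entropy term}),
\end{equation}

where $T$ is the number of steps collected in the episode. This is simply the computation performed by `actor_critic_train` below, written out as one equation.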
###Code # @title Set the hyperparameters for Actor Critic learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # Only used in Actor-Critic Method hidden_size = 256 # @param {type:"integer"} num_steps = 300 ###Output _____no_output_____ ###Markdown Actor Critic Network ###Code class ActorCriticNet(nn.Module): def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4): super(ActorCriticNet, self).__init__() self.num_actions = num_actions self.critic_linear1 = nn.Linear(num_inputs, hidden_size) self.critic_linear2 = nn.Linear(hidden_size, 1) self.actor_linear1 = nn.Linear(num_inputs, hidden_size) self.actor_linear2 = nn.Linear(hidden_size, num_actions) self.all_rewards = [] self.all_lengths = [] self.average_lengths = [] def forward(self, state): state = Variable(torch.from_numpy(state).float().unsqueeze(0)) value = F.relu(self.critic_linear1(state)) value = self.critic_linear2(value) policy_dist = F.relu(self.actor_linear1(state)) policy_dist = F.softmax(self.actor_linear2(policy_dist), dim=1) return value, policy_dist ###Output _____no_output_____ ###Markdown Training ###Code def actor_critic_train(episodes): all_lengths = [] average_lengths = [] all_rewards = [] entropy_term = 0 for episode in range(episodes): log_probs = [] values = [] rewards = [] state = env.reset() for steps in range(num_steps): value, policy_dist = actor_critic.forward(state) value = value.detach().numpy()[0, 0] dist = policy_dist.detach().numpy() action = np.random.choice(num_outputs, p=np.squeeze(dist)) log_prob = torch.log(policy_dist.squeeze(0)[action]) entropy = -np.sum(np.mean(dist) * np.log(dist)) new_state, reward, done, _ = env.step(action) rewards.append(reward) values.append(value) log_probs.append(log_prob) entropy_term += entropy state = new_state if done or steps == num_steps - 1: qval, _ = actor_critic.forward(new_state) qval = qval.detach().numpy()[0, 0] all_rewards.append(np.sum(rewards)) all_lengths.append(steps) average_lengths.append(np.mean(all_lengths[-10:])) if episode % 50 == 0: print(f"episode: {episode},\treward: {np.sum(rewards)}," f"\ttotal length: {steps}," f"\taverage length: {average_lengths[-1]}") break # compute Q values qvals = np.zeros_like(values) for t in reversed(range(len(rewards))): qval = rewards[t] + gamma * qval qvals[t] = qval #update actor critic values = torch.FloatTensor(values) qvals = torch.FloatTensor(qvals) log_probs = torch.stack(log_probs) advantage = qvals - values actor_loss = (-log_probs * advantage).mean() critic_loss = 0.5 * advantage.pow(2).mean() ac_loss = actor_loss + critic_loss + 0.001 * entropy_term ac_optimizer.zero_grad() ac_loss.backward() ac_optimizer.step() # Store results actor_critic.average_lengths = average_lengths actor_critic.all_rewards = all_rewards actor_critic.all_lengths = all_lengths ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 # @param {type:"integer"} env.reset() num_inputs = env.observation_space.shape[0] num_outputs = env.action_space.n actor_critic = ActorCriticNet(num_inputs, num_outputs, hidden_size) ac_optimizer = optim.Adam(actor_critic.parameters()) actor_critic_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for Actor Critic def plot_actor_critic_training(actor_critic, episodes): window = int(episodes / 20) plt.figure(figsize=(15, 4)) plt.subplot(1, 2, 1) smoothed_rewards = pd.Series(actor_critic.all_rewards).rolling(window).mean() std = 
pd.Series(actor_critic.all_rewards).rolling(window).std() plt.plot(smoothed_rewards, label='Smoothed rewards') plt.fill_between(range(len(smoothed_rewards)), smoothed_rewards - std, smoothed_rewards + std, color='orange', alpha=0.2) plt.xlabel('Episode') plt.ylabel('Reward') plt.subplot(1, 2, 2) plt.plot(actor_critic.all_lengths, label='All lengths') plt.plot(actor_critic.average_lengths, label='Average lengths') plt.xlabel('Episode') plt.ylabel('Episode length') plt.legend() plt.tight_layout() plt.show() plot_actor_critic_training(actor_critic, episodes) ###Output _____no_output_____ ###Markdown Exercise 8.3: Effect of episodes on performanceChange the episodes from 500 to 3000 and observe the performance impact. Exercise 8.4: Effect of learning rate on performanceModify the hyperparameters related to learning_rate and gamma and observe the impact on the performance.Be sure to rerun the function definition cells in order to pick up on the updated values. --- Section 9: RL in the real world ###Code # @title Video 9: Real-world applications and ethics from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Nq4y1X7AF", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"5kBtiW88QVw", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Exercise 9: Group discussionForm a group of 2-3 and have discussions (roughly 3 minutes each) of the following questions:1. **Safety**: what are some safety issues that arise in RL that don’t arise with e.g. supervised learning?2. **Generalization**: What happens if your RL agent is presented with data it hasn’t trained on? (“goes out of distribution”)3. How important do you think **interpretability** is in the ethical and safe deployment of RL agents in the real world? 
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_99944c89.py) --- Section 10: How to learn more ###Code # @title Video 10: How to learn more from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1WM4y1T7G5", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"dKaOpgor5Ek", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown &nbsp; Tutorial 1: Introduction to Reinforcement Learning**Week 3, Day 2: Basic Reinforcement Learning (RL)****By Neuromatch Academy**__Content creators:__ Matthew Sargent, Anoop Kulkarni, Sowmya Parthiban, Feryal Behbahani, Jane Wang__Content reviewers:__ Ezekiel Williams, Mehul Rastogi, Lily Cheng, Roberto Guidotti, Arush Tagade, Kelson Shilling-Scrivo__Content editors:__ Spiros Chavlis __Production editors:__ Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesBy the end of the tutorial, you should be able to:1. Within the RL framework, be able to identify the different components: environment, agent, states, and actions. 2. Understand the Bellman equation and components involved. 3. Implement tabular value-based model-free learning (Q-learning and SARSA).4. Discuss real-world applications and ethical issues of RL.By completing the Bonus sections, you should be able to:1. Run a DQN agent and experiment with different hyperparameters.2. Have a high-level understanding of other (nonvalue-based) RL methods. ###Code # @title Tutorial slides # @markdown These are the slides for the videos in this tutorial # @markdown If you want to locally download the slides, click [here](https://osf.io/m3kqy/download) from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/m3kqy/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown --- SetupRun the following *Setup* cells in order to set up needed functions. Don't worry about the code for now!**Note:** There is an issue with some images not showing up if you're using a Safari browser. Please switch to Chrome if this is the case. ###Code # @title Install requirements from IPython.display import clear_output # @markdown we install the acme library, see [here](https://github.com/deepmind/acme) for more info # @markdown WARNING: There may be errors and warnings reported during the installation. # @markdown However, they should be ignored. 
!apt-get install -y xvfb ffmpeg --quiet !pip install --upgrade pip --quiet !pip install imageio --quiet !pip install imageio-ffmpeg !pip install gym --quiet !pip install enum34 --quiet !pip install dm-env --quiet !pip install pandas --quiet !pip install keras-nightly==2.5.0.dev2021020510 --quiet !pip install grpcio==1.34.0 --quiet !pip install tensorflow --quiet !pip install typing --quiet !pip install einops --quiet !pip install dm-acme --quiet !pip install dm-acme[reverb] --quiet !pip install dm-acme[tf] --quiet !pip install dm-acme[envs] --quiet !pip install dm-env --quiet !pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet from evaltools.airtable import AirtableForm # generate airtable form atform = AirtableForm('appn7VdPRseSoMXEG','W3D2_T1','https://portal.neuromatchacademy.org/api/redirect/to/3e77471d-4de0-4e43-a026-9cfb603b5197') clear_output() # Import modules import gym import enum import copy import time import acme import torch import base64 import dm_env import IPython import imageio import warnings import itertools import collections import numpy as np import pandas as pd import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import matplotlib as mpl import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf from acme import specs from acme import wrappers from acme.utils import tree_utils from acme.utils import loggers from torch.autograd import Variable from torch.distributions import Categorical from typing import Callable, Sequence tf.enable_v2_behavior() warnings.filterwarnings('ignore') np.set_printoptions(precision=3, suppress=1) # @title Figure settings import ipywidgets as widgets # interactive display %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") mpl.rc('image', cmap='Blues') # @title Helper Functions # @markdown Implement helpers for value visualisation map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a] map_from_action_to_name = lambda a: ("up", "right", "down", "left")[a] def plot_values(values, colormap='pink', vmin=-1, vmax=10): plt.imshow(values, interpolation="nearest", cmap=colormap, vmin=vmin, vmax=vmax) plt.yticks([]) plt.xticks([]) plt.colorbar(ticks=[vmin, vmax]) def plot_state_value(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(4, 4)) vmin = np.min(action_values) vmax = np.max(action_values) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_action_values(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(8, 8)) fig.subplots_adjust(wspace=0.3, hspace=0.3) vmin = np.min(action_values) vmax = np.max(action_values) dif = vmax - vmin for a in [0, 1, 2, 3]: plt.subplot(3, 3, map_from_action_to_subplot(a)) plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif) action_name = map_from_action_to_name(a) plt.title(r"$q(s, \mathrm{" + action_name + r"})$") plt.subplot(3, 3, 5) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_stats(stats, window=10): plt.figure(figsize=(16,4)) plt.subplot(121) xline = range(0, len(stats.episode_lengths), window) plt.plot(xline, smooth(stats.episode_lengths, window=window)) plt.ylabel('Episode Length') plt.xlabel('Episode Count') plt.subplot(122) plt.plot(xline, 
smooth(stats.episode_rewards, window=window)) plt.ylabel('Episode Return') plt.xlabel('Episode Count') # @title Helper functions def smooth(x, window=10): return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1) # @title Set random seed # @markdown Executing `set_seed(seed=seed)` you are setting the seed # for DL its critical to set the random seed so that students can have a # baseline to compare their results to expected results. # Read more here: https://pytorch.org/docs/stable/notes/randomness.html # Call `set_seed` function in the exercises to ensure reproducibility. import random import torch def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') # In case that `DataLoader` is used def seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 np.random.seed(worker_seed) random.seed(worker_seed) # @title Set device (GPU or CPU). Execute `set_device()` # especially if torch modules used. # inform the user if the notebook uses GPU or CPU. def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device SEED = 2021 set_seed(seed=SEED) DEVICE = set_device() ###Output _____no_output_____ ###Markdown --- Section 1: Introduction to Reinforcement Learning*Time estimate: ~15mins* ###Code # @title Video 1: Introduction to RL from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV18V411p7iK", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"BWz3scQN50M", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 1: Introduction to RL') display(out) ###Output _____no_output_____ ###Markdown Acme: a research framework for reinforcement learning**Acme** is a library of reinforcement learning (RL) agents and agent building blocks by Google DeepMind. Acme strives to expose simple, efficient, and readable agents, that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to do novel research. The design of Acme also attempts to provide multiple points of entry to the RL problem at differing levels of complexity.For more information see the github's repository [deepmind/acme](https://github.com/deepmind/acme). 
--- Section 2: General Formulation of RL Problems and Gridworlds*Time estimate: ~30mins* ###Code # @title Video 2: General Formulation and MDPs from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1k54y1E7Zn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"h6TxAALY5Fc", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 2: General Formulation and MDPs') display(out) ###Output _____no_output_____ ###Markdown The agent interacts with the environment in a loop corresponding to the following diagram. The environment defines a set of **actions** that an agent can take. The agent takes an action informed by the **observations** it receives, and will get a **reward** from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulation of rewards obtained from the environment. Section 2.1: The Environment For this practical session we will focus on a **simple grid world** environment,which consists of a 9 x 10 grid of either wall or empty cells, depicted in black and white, respectively. The smiling agent starts from an initial location and needs to navigate to reach the goal square.Below you will find an implementation of this Gridworld as a `dm_env.Environment`.There is no coding in this section, but if you want, you can look over the provided code so that you can familiarize yourself with an example of how to set up a **grid world** environment. ###Code # @title Implement GridWorld { form-width: "30%" } # @markdown *Double-click* to inspect the contents of this cell. class ObservationType(enum.IntEnum): STATE_INDEX = enum.auto() AGENT_ONEHOT = enum.auto() GRID = enum.auto() AGENT_GOAL_POS = enum.auto() class GridWorld(dm_env.Environment): def __init__(self, layout, start_state, goal_state=None, observation_type=ObservationType.STATE_INDEX, discount=0.9, penalty_for_walls=-5, reward_goal=10, max_episode_length=None, randomize_goals=False): """Build a grid environment. Simple gridworld defined by a map layout, a start and a goal state. Layout should be a NxN grid, containing: * 0: empty * -1: wall * Any other positive value: value indicates reward; episode will terminate Args: layout: NxN array of numbers, indicating the layout of the environment. start_state: Tuple (y, x) of starting location. goal_state: Optional tuple (y, x) of goal location. Will be randomly sampled once if None. observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. 
First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x) discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). reward_goal: Reward added when finding the goal (should be positive). max_episode_length: If set, will terminate an episode after this many steps. randomize_goals: If true, randomize goal at every episode. """ if observation_type not in ObservationType: raise ValueError('observation_type should be a ObservationType instace.') self._layout = np.array(layout) self._start_state = start_state self._state = self._start_state self._number_of_states = np.prod(np.shape(self._layout)) self._discount = discount self._penalty_for_walls = penalty_for_walls self._reward_goal = reward_goal self._observation_type = observation_type self._layout_dims = self._layout.shape self._max_episode_length = max_episode_length self._num_episode_steps = 0 self._randomize_goals = randomize_goals if goal_state is None: # Randomly sample goal_state if not provided goal_state = self._sample_goal() self.goal_state = goal_state def _sample_goal(self): """Randomly sample reachable non-starting state.""" # Sample a new goal n = 0 max_tries = 1e5 while n < max_tries: goal_state = tuple(np.random.randint(d) for d in self._layout_dims) if goal_state != self._state and self._layout[goal_state] == 0: # Reachable state found! return goal_state n += 1 raise ValueError('Failed to sample a goal state.') @property def layout(self): return self._layout @property def number_of_states(self): return self._number_of_states @property def goal_state(self): return self._goal_state @property def start_state(self): return self._start_state @property def state(self): return self._state def set_state(self, x, y): self._state = (y, x) @goal_state.setter def goal_state(self, new_goal): if new_goal == self._state or self._layout[new_goal] < 0: raise ValueError('This is not a valid goal!') # Zero out any other goal self._layout[self._layout > 0] = 0 # Setup new goal location self._layout[new_goal] = self._reward_goal self._goal_state = new_goal def observation_spec(self): if self._observation_type is ObservationType.AGENT_ONEHOT: return specs.Array( shape=self._layout_dims, dtype=np.float32, name='observation_agent_onehot') elif self._observation_type is ObservationType.GRID: return specs.Array( shape=self._layout_dims + (3,), dtype=np.float32, name='observation_grid') elif self._observation_type is ObservationType.AGENT_GOAL_POS: return specs.Array( shape=(4,), dtype=np.float32, name='observation_agent_goal_pos') elif self._observation_type is ObservationType.STATE_INDEX: return specs.DiscreteArray( self._number_of_states, dtype=int, name='observation_state_index') def action_spec(self): return specs.DiscreteArray(4, dtype=int, name='action') def get_obs(self): if self._observation_type is ObservationType.AGENT_ONEHOT: obs = np.zeros(self._layout.shape, dtype=np.float32) # Place agent obs[self._state] = 1 return obs elif self._observation_type is ObservationType.GRID: obs = np.zeros(self._layout.shape + (3,), dtype=np.float32) obs[..., 0] = self._layout < 0 obs[self._state[0], self._state[1], 1] = 1 obs[self._goal_state[0], self._goal_state[1], 2] = 1 return obs elif self._observation_type is ObservationType.AGENT_GOAL_POS: return np.array(self._state + self._goal_state, 
dtype=np.float32) elif self._observation_type is ObservationType.STATE_INDEX: y, x = self._state return y * self._layout.shape[1] + x def reset(self): self._state = self._start_state self._num_episode_steps = 0 if self._randomize_goals: self.goal_state = self._sample_goal() return dm_env.TimeStep( step_type=dm_env.StepType.FIRST, reward=None, discount=None, observation=self.get_obs()) def step(self, action): y, x = self._state if action == 0: # up new_state = (y - 1, x) elif action == 1: # right new_state = (y, x + 1) elif action == 2: # down new_state = (y + 1, x) elif action == 3: # left new_state = (y, x - 1) else: raise ValueError( 'Invalid action: {} is not 0, 1, 2, or 3.'.format(action)) new_y, new_x = new_state step_type = dm_env.StepType.MID if self._layout[new_y, new_x] == -1: # wall reward = self._penalty_for_walls discount = self._discount new_state = (y, x) elif self._layout[new_y, new_x] == 0: # empty cell reward = 0. discount = self._discount else: # a goal reward = self._layout[new_y, new_x] discount = 0. new_state = self._start_state step_type = dm_env.StepType.LAST self._state = new_state self._num_episode_steps += 1 if (self._max_episode_length is not None and self._num_episode_steps >= self._max_episode_length): step_type = dm_env.StepType.LAST return dm_env.TimeStep( step_type=step_type, reward=np.float32(reward), discount=discount, observation=self.get_obs()) def plot_grid(self, add_start=True): plt.figure(figsize=(4, 4)) plt.imshow(self._layout <= -1, interpolation='nearest') ax = plt.gca() ax.grid(0) plt.xticks([]) plt.yticks([]) # Add start/goal if add_start: plt.text( self._start_state[1], self._start_state[0], r'$\mathbf{S}$', fontsize=16, ha='center', va='center') plt.text( self._goal_state[1], self._goal_state[0], r'$\mathbf{G}$', fontsize=16, ha='center', va='center') h, w = self._layout.shape for y in range(h - 1): plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-w', lw=2) for x in range(w - 1): plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-w', lw=2) def plot_state(self, return_rgb=False): self.plot_grid(add_start=False) # Add the agent location plt.text( self._state[1], self._state[0], u'😃', # fontname='symbola', fontsize=18, ha='center', va='center', ) if return_rgb: fig = plt.gcf() plt.axis('tight') plt.subplots_adjust(0, 0, 1, 1, 0, 0) fig.canvas.draw() data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') w, h = fig.canvas.get_width_height() data = data.reshape((h, w, 3)) plt.close(fig) return data def plot_policy(self, policy): action_names = [ r'$\uparrow$', r'$\rightarrow$', r'$\downarrow$', r'$\leftarrow$' ] self.plot_grid() plt.title('Policy Visualization') h, w = self._layout.shape for y in range(h): for x in range(w): # if ((y, x) != self._start_state) and ((y, x) != self._goal_state): if (y, x) != self._goal_state: action_name = action_names[policy[y, x]] plt.text(x, y, action_name, ha='center', va='center') def plot_greedy_policy(self, q): greedy_actions = np.argmax(q, axis=2) self.plot_policy(greedy_actions) def build_gridworld_task(task, discount=0.9, penalty_for_walls=-5, observation_type=ObservationType.STATE_INDEX, max_episode_length=200): """Construct a particular Gridworld layout with start/goal states. Args: task: string name of the task to use. One of {'simple', 'obstacle', 'random_goal'}. discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). observation_type: Enum observation type to use. 
One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x). max_episode_length: If set, will terminate an episode after this many steps. """ tasks_specifications = { 'simple': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (7, 2) }, 'obstacle': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1], [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (2, 8) }, 'random_goal': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), # 'randomize_goals': True }, } return GridWorld( discount=discount, penalty_for_walls=penalty_for_walls, observation_type=observation_type, max_episode_length=max_episode_length, **tasks_specifications[task]) def setup_environment(environment): """Returns the environment and its spec.""" # Make sure the environment outputs single-precision floats. environment = wrappers.SinglePrecisionWrapper(environment) # Grab the spec of the environment. environment_spec = specs.make_environment_spec(environment) return environment, environment_spec ###Output _____no_output_____ ###Markdown We will use two distinct tabular GridWorlds:* `simple` where the goal is at the bottom left of the grid, little navigation required.* `obstacle` where the goal is behind an obstacle the agent must avoid.You can visualize the grid worlds by running the cell below. Note that **S** indicates the start state and **G** indicates the goal. ###Code # Visualise GridWorlds # Instantiate two tabular environments, a simple task, and one that involves # the avoidance of an obstacle. simple_grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID) obstacle_grid = build_gridworld_task( task='obstacle', observation_type=ObservationType.GRID) # Plot them. simple_grid.plot_grid() plt.title('Simple') obstacle_grid.plot_grid() plt.title('Obstacle') ###Output _____no_output_____ ###Markdown In this environment, the agent has four possible **actions**: `up`, `right`, `down`, and `left`. The **reward** is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal, and otherwise continues. The **discount** on continuing steps, is $\gamma = 0.9$. 
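As a quick worked example of what these numbers imply: an agent that walks straight to the goal in $T$ steps without bumping into a wall collects a discounted return of $\gamma^{T-1} \cdot 10$ from the start state (e.g. $0.9^{9} \cdot 10 \approx 3.9$ for a 10-step path), whereas every wall bump both costs $-5$ and delays the $+10$, so shorter, wall-free paths yield higher returns.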
Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g., **observations**) or consumes (e.g., **actions**). The `environment_spec` will show you the form of the **observations**, **rewards** and **discounts** that the environment exposes and the form of the **actions** that can be taken. ###Code # @title Look at environment_spec { form-width: "30%" } # Note: setup_environment is implemented in the same cell as GridWorld. environment, environment_spec = setup_environment(simple_grid) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ###Output _____no_output_____ ###Markdown We first set the environment to its initial state by calling the `reset()` method which returns the first observation and resets the agent to the starting location. ###Code environment.reset() environment.plot_state() ###Output _____no_output_____ ###Markdown Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.Let's take an action and visualise the resulting state of the grid-world. (You'll need to rerun the cell if you pick a new action.) **Note for kaggle users:** As kaggle does not render the forms automatically the students should be careful to notice the various instructions and manually play around with the values for the variables ###Code # @title Pick an action and see the state changing action = "left" #@param ["up", "right", "down", "left"] {type:"string"} action_int = {'up': 0, 'right': 1, 'down': 2, 'left':3 } action = int(action_int[action]) timestep = environment.step(action) # pytype: dm_env.TimeStep environment.plot_state() # @title Run loop { form-width: "30%" } # @markdown This function runs an agent in the environment for a number of # @markdown episodes, allowing it to learn. # @markdown *Double-click* to inspect the `run_loop` function. def run_loop(environment, agent, num_episodes=None, num_steps=None, logger_time_delta=1., label='training_loop', log_loss=False, ): """Perform the run loop. We are following the Acme run loop. Run the environment loop for `num_episodes` episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely. Args: environment: dm_env used to generate trajectories. agent: acme.Actor for selecting actions in the run loop. num_steps: number of steps to run the loop for. If `None` (default), runs without limit. num_episodes: number of episodes to run the loop for. If `None` (default), runs without limit. logger_time_delta: time interval (in seconds) between consecutive logging steps. label: optional label used at logging steps. """ logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta) iterator = range(num_episodes) if num_episodes else itertools.count() all_returns = [] num_total_steps = 0 for episode in iterator: # Reset any counts and start the environment. 
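    # One episode follows the Acme pattern: env.reset() -> agent.observe_first(),
    # then repeat select_action -> env.step -> agent.observe -> agent.update
    # until the returned timestep is marked as LAST.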
start_time = time.time() episode_steps = 0 episode_return = 0 episode_loss = 0 timestep = environment.reset() # Make the first observation. agent.observe_first(timestep) # Run an episode. while not timestep.last(): # Generate an action from the agent's policy and step the environment. action = agent.select_action(timestep.observation) timestep = environment.step(action) # Have the agent observe the timestep and let the agent update itself. agent.observe(action, next_timestep=timestep) agent.update() # Book-keeping. episode_steps += 1 num_total_steps += 1 episode_return += timestep.reward if log_loss: episode_loss += agent.last_loss if num_steps is not None and num_total_steps >= num_steps: break # Collect the results and combine with counts. steps_per_second = episode_steps / (time.time() - start_time) result = { 'episode': episode, 'episode_length': episode_steps, 'episode_return': episode_return, } if log_loss: result['loss_avg'] = episode_loss/episode_steps all_returns.append(episode_return) # Log the given results. logger.write(result) if num_steps is not None and num_total_steps >= num_steps: break return all_returns # @title Implement the evaluation loop { form-width: "30%" } # @markdown This function runs the agent in the environment for a number of # @markdown episodes, without allowing it to learn, in order to evaluate it. # @markdown *Double-click* to inspect the `evaluate` function. def evaluate(environment: dm_env.Environment, agent: acme.Actor, evaluation_episodes: int): frames = [] for episode in range(evaluation_episodes): timestep = environment.reset() episode_return = 0 steps = 0 while not timestep.last(): frames.append(environment.plot_state(return_rgb=True)) action = agent.select_action(timestep.observation) timestep = environment.step(action) steps += 1 episode_return += timestep.reward print( f'Episode {episode} ended with reward {episode_return} in {steps} steps' ) return frames def display_video(frames: Sequence[np.ndarray], filename: str = 'temp.mp4', frame_rate: int = 12): """Save and display video.""" # Write the frames to a video. with imageio.get_writer(filename, fps=frame_rate) as video: for frame in frames: video.append_data(frame) # Read video and display the video. video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) ###Output _____no_output_____ ###Markdown Section 2.2: The AgentWe will be implementing Tabular & Function Approximation agents. Tabular agents are purely in Python.All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs: Agent interfaceEach agent implements the following functions:```pythonclass Agent(acme.Actor): def __init__(self, number_of_actions, number_of_states, ...): """Provides the agent the number of actions and number of states.""" def select_action(self, observation): """Generates actions from observations.""" def observe_first(self, timestep): """Records the initial timestep in a trajectory.""" def observe(self, action, next_timestep): """Records the transition which occurred from taking an action.""" def update(self): """Updates the agent's internals to potentially change its behavior."""```Remarks on the `observe()` function:1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.2. 
The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.3. The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`. Coding Exercise 2.1: Random AgentBelow is a partially complete implementation of an agent that follows a random (non-learning) policy. Fill in the ```select_action``` method.The ```select_action``` method should return a random **integer** between 0 and ```self._num_actions``` (not a tensor or an array!) ###Code class RandomAgent(acme.Actor): def __init__(self, environment_spec): """Gets the number of available actions from the environment spec.""" self._num_actions = environment_spec.actions.num_values def select_action(self, observation): """Selects an action uniformly at random.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the select action method") ################################################# # TODO return a random integer between 0 and self._num_actions. # HINT: see the reference for how to sample a random integer in numpy: # https://numpy.org/doc/1.16/reference/routines.random.html action = ... return action def observe_first(self, timestep): """Does not record as the RandomAgent has no use for data.""" pass def observe(self, action, next_timestep): """Does not record as the RandomAgent has no use for data.""" pass def update(self): """Does not update as the RandomAgent does not learn from data.""" pass # add event to airtable atform.add_event('Coding Exercise 2.1: Random Agent') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_23bbdfe0.py) ###Code # @title Visualisation of a random agent in GridWorld { form-width: "30%" } # Create the agent by giving it the action space specification. agent = RandomAgent(environment_spec) # Run the agent in the evaluation loop, which returns the frames. frames = evaluate(environment, agent, evaluation_episodes=1) # Visualize the random agent's episode.
display_video(frames) ###Output _____no_output_____ ###Markdown --- Section 3: The Bellman Equation*Time estimate: ~15mins* ###Code # @title Video 3: The Bellman Equation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Lv411E7CB", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"cLCoNBmYUns", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 3: The Bellman Equation') display(out) ###Output _____no_output_____ ###Markdown In this tutorial we focus mainly on **value based methods**, where agents maintain a value for all state-action pairs and use those estimates to choose actions that maximize that **value** (instead of maintaining a policy directly, like in **policy gradient methods**). We represent the **action-value function** (otherwise known as $\color{green}Q$-function associated with following/employing a policy $\pi$ in a given MDP as:\begin{equation}\color{green}Q^{\color{blue}{\pi}}(\color{red}{s},\color{blue}{a}) = \mathbb{E}_{\tau \sim P^{\color{blue}{\pi}}} \left[ \sum_t \gamma^t \color{green}{r_t}| s_0=\color{red}s,a_0=\color{blue}{a} \right]\end{equation}where $\tau = \{\color{red}{s_0}, \color{blue}{a_0}, \color{green}{r_0}, \color{red}{s_1}, \color{blue}{a_1}, \color{green}{r_1}, \cdots \}$Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:\begin{equation}\color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a}) = \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\left( \color{green}{R}(\color{red}{s},\color{blue}{a}, \color{red}{s'}) + \gamma \color{green}V^\color{blue}{\pi}(\color{red}{s'}) \right)\end{equation}where $\color{green}V^\color{blue}{\pi}$ is the expected $\color{green}Q^\color{blue}{\pi}$ value for a particular state, i.e. $\color{green}V^\color{blue}{\pi}(\color{red}{s}) = \sum_{\color{blue}{a} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi}(\color{blue}{a} |\color{red}{s}) \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a})$. 
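To make the Bellman expectation equation above concrete, here is a small self-contained numerical sketch. It uses a made-up two-state, two-action MDP rather than the GridWorld of this tutorial, and all transition probabilities, rewards, policy weights, and values below are illustrative assumptions:

```python
import numpy as np

gamma = 0.9

# P[s, a, s']: transition probabilities; R[s, a, s']: rewards (both made up).
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.5, 0.5], [1.0, 0.0]]])
pi = np.array([[0.5, 0.5], [0.5, 0.5]])  # pi[s, a]: a uniform random policy.

# Suppose we already have an estimate of V^pi(s') for each successor state.
V = np.array([0.3, 1.1])

# Bellman expectation backup:
# Q(s, a) = sum_{s'} P(s'|s, a) * (R(s, a, s') + gamma * V(s')).
Q = (P * (R + gamma * V[None, None, :])).sum(axis=-1)

# And V(s) = sum_a pi(a|s) * Q(s, a), matching the definition of V^pi above.
V_from_Q = (pi * Q).sum(axis=-1)
print(Q)
print(V_from_Q)
```

Applying this backup repeatedly, for every state-action pair, is the core operation behind the policy evaluation agent in the next section.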
--- Section 4: Policy Evaluation*Time estimate: ~30mins* ###Code # @title Video 4: Policy Evaluation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV15f4y157zA", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HAxR4SuaZs4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 4: Policy Evaluation') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **Episodic vs non-episodic environments:** Up until now, we've mainly been talking about episodic environments, or environments that terminate and reset (resampled) after a finite number of steps. However, there are also *non-episodic* environments, in which an agent cannot count on the environment resetting. Thus, they are forced to learn in a *continual* fashion.**Policy iteration vs value iteration:** Compare the two equations below, noting that the only difference is that in value iteration, the second sum is replaced by a max.*Policy iteration (using Bellman expectation equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\sum_{\color{blue}{a'} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi_{k-1}}(\color{blue}{a'} |\color{red}{s'}) \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation}*Value iteration (using Bellman optimality equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\max_{\color{blue}{a'}} \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation} Coding Exercise 4.1 Policy Evaluation Agent Tabular agents implement a function `q_values()` returning a matrix of Q valuesof shape: (`number_of_states`, `number_of_actions`)In this section, we will implement a `PolicyEvalAgent` as an ACME actor: given an `evaluation_policy` $\pi_e$ and a `behaviour_policy` $\pi_b$, it will use the `behaviour_policy` to choose actions, and it will use the corresponding trajectory data to evaluate the `evaluation_policy` (i.e. compute the Q-values as if you were following the `evaluation_policy`). Algorithm:**Initialize** $Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}(\color{red}s)$**Loop forever**:1. $\color{red}{s} \gets{}$current (nonterminal) state 2. $\color{blue}{a} \gets{} \text{behaviour_policy }\pi_b(\color{red}s)$ 3. Take action $\color{blue}{a}$; observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$4. 
Compute TD-error: $\delta = \color{green}{r} + \gamma Q(\color{red}{s'}, \underbrace{\pi_e(\color{red}{s'})}_{\color{blue}{a'}}) − Q(\color{red}s, \color{blue}a)$5. Update Q-value with a small $\alpha$ step: $Q(\color{red}s, \color{blue}a) \gets Q(\color{red}s, \color{blue}a) + \alpha \delta$We will use a uniform `random policy` as our `evaluation policy` here, but you could replace this with any policy you want, such as a greedy one. ###Code # Uniform random policy def random_policy(q): return np.random.randint(4) class PolicyEvalAgent(acme.Actor): def __init__(self, environment_spec, evaluated_policy, behaviour_policy=random_policy, step_size=0.1): self._state = None # Get number of states and actions from the environment spec. self._number_of_states = environment_spec.observations.num_values self._number_of_actions = environment_spec.actions.num_values self._step_size = step_size self._behaviour_policy = behaviour_policy self._evaluated_policy = evaluated_policy ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Initialize your Q-values!") ################################################# # TODO Initialize the Q-values to be all zeros. # (Note: can also be random, but we use zeros here for reproducibility) # HINT: This is a table of state and action pairs, so needs to be a 2-D # array. See the reference for how to create this in numpy: # https://numpy.org/doc/stable/reference/generated/numpy.zeros.html self._q = ... self._action = None self._next_state = None @property def q_values(self): # return the Q values return self._q def select_action(self, observation): # Select an action return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): self._state = timestep.observation def observe(self, action, next_timestep): s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute TD-Error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Need to select the next action") ################################################# # TODO Select the next action from the evaluation policy # HINT: Refer to step 4 of the algorithm above. next_a = ... self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a] def update(self): # Updates s = self._state a = self._action # Q-value table update. self._q[s, a] += self._step_size * self._td_error # Update the state self._state = self._next_state # add event to airtable atform.add_event('Coding Exercise 4.1 Policy Evaluation Agent') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_b681200a.py) ###Code # @title Perform policy evaluation { form-width: "30%" } # @markdown Here you can visualize the state value and action-value functions for the "simple" task. num_steps = 1e3 # Create the environment grid = build_gridworld_task(task='simple') environment, environment_spec = setup_environment(grid) # Create the policy evaluation agent to evaluate a random policy.
agent = PolicyEvalAgent(environment_spec, evaluated_policy=random_policy) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=int(num_steps)) # get the q-values q = agent.q_values.reshape(grid._layout.shape + (4, )) # visualize value functions print('AFTER {} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=1.) ###Output _____no_output_____ ###Markdown --- Section 5: Tabular Value-Based Model-Free Learning*Time estimate: ~50mins* ###Code # @title Video 5: Model-Free Learning from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1iU4y1n7M6", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"Y4TweUYnexU", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 5: Model-Free Learning') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **On-policy (SARSA) vs off-policy (Q-learning) TD control:** Compare the two equations below and see that the only difference is that for Q-learning, the update is performed assuming that a greedy policy is followed, which is not the one used to collect the data, hence the name *off-policy*. *SARSA*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation}*Q-learning*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\max_{\color{blue}{a'}} \color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation} Section 5.1: On-policy control: SARSA AgentIn this section, we are focusing on control RL algorithms, which perform the **evaluation** and **improvement** of the policy synchronously. That is, the policy that is being evaluated improves as the agent is using it to interact with the environent.The first algorithm we are going to be looking at is SARSA. This is an **on-policy algorithm** -- i.e: the data collection is done by leveraging the policy we're trying to optimize. As discussed during lectures, a greedy policy with respect to a given $\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\epsilon$-greedy policy with respect to $\color{Green}Q$. SARSA Algorithm**Input:**- $\epsilon \in (0, 1)$ the probability of taking a random action , and- $\alpha > 0$ the step size, also known as learning rate.**Initialize:** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}$**Loop forever:**1. Get $\color{red}s \gets{}$current (non-terminal) state 2. 
Select $\color{blue}a \gets{} \text{epsilon_greedy}(\color{green}Q(\color{red}s, \cdot))$ 3. Step in the environment by passing the selected action $\color{blue}a$4. Observe resulting reward $\color{green}r$, discount $\gamma$, and state $\color{red}{s'}$5. Compute TD error: $\Delta \color{green}Q \gets \color{green}r + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}s, \color{blue}a)$, where $\color{blue}{a'} \gets \text{epsilon_greedy}(\color{green}Q(\color{red}{s'}, \cdot))$6. Update $\color{green}Q(\color{red}s, \color{blue}a) \gets \color{green}Q(\color{red}s, \color{blue}a) + \alpha \Delta \color{green}Q$ Coding Exercise 5.1: Implement $\epsilon$-greedyBelow you will find incomplete code for sampling from an $\epsilon$-greedy policy, to be used later when we implement an agent that learns values according to the SARSA algorithm. ###Code def epsilon_greedy( q_values_at_s: np.ndarray, # Q-values in state s: Q(s, a). epsilon: float = 0.1 # Probability of taking a random action. ): """Return an epsilon-greedy action sample.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete epsilon greedy policy function") ################################################# # TODO generate a uniform random number and compare it to epsilon to decide if # the action should be greedy or not # HINT: Use np.random.random() to generate a random float from 0 to 1. if ...: #TODO Greedy: Pick action with the largest Q-value. action = ... else: # Get the number of actions from the size of the given vector of Q-values. num_actions = np.array(q_values_at_s).shape[-1] # TODO else return a random action # HINT: Use np.random.randint() to generate a random integer. action = ... return action # add event to airtable atform.add_event('Coding Exercise 5.1: Implement epsilon-greedy') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7137b538.py) ###Code # @title Sample action from $\epsilon$-greedy { form-width: "30%" } # @markdown With $\epsilon=0.5$, you should see that about half the time, you will get back the optimal # @markdown action 3, but half the time, it will be random. # Create fake q-values q_values = np.array([0, 0, 0, 1]) # Set epsilon = 0.5 epsilon = 0.5 action = epsilon_greedy(q_values, epsilon=epsilon) print(action) ###Output _____no_output_____ ###Markdown Coding Exercise 5.2: Run your SARSA agent on the `obstacle` environmentThis environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods. Try varying the number of steps. ###Code class SarsaAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, epsilon: float, step_size: float = 0.1 ): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size self._epsilon = epsilon # Containers you may find useful.
self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return epsilon_greedy(self._q[observation], self._epsilon) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the action that would be taken from the next state. next_a = self.select_action(next_s) # Compute the on-policy Q-value update. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the on-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: see step 5 in the pseudocode above. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[s, a] += ... # Update the current state. self._state = self._next_state # add event to airtable atform.add_event('Coding Exercise 5.2: Run your SARSA agent on the obstacle environment') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_4099088a.py) ###Code # @title Run SARSA agent and visualize value function num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # Create the environment. grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # Create the agent. agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1) # Run the experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) print('AFTER {0:,} STEPS ...'.format(num_steps)) # Get the Q-values and reshape them to recover grid-like structure of states. q_values = agent.q_values grid_shape = grid.layout.shape q_values = q_values.reshape([*grid_shape, -1]) # Visualize the value and Q-value tables. plot_action_values(q_values, epsilon=1.) # Visualize the greedy policy. environment.plot_greedy_policy(q_values) ###Output _____no_output_____ ###Markdown Section 5.2 Off-policy control: Q-learning AgentReminder: $\color{green}Q$-learning is a very powerful and general algorithm, that enables control (figuring out the optimal policy/value function) both on and off-policy.**Initialize** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s} \in \color{red}{\mathcal{S}}$ and $\color{blue}{a} \in \color{blue}{\mathcal{A}}$**Loop forever**:1. Get $\color{red}{s} \gets{}$current (non-terminal) state 2. Select $\color{blue}{a} \gets{} \text{behaviour_policy}(\color{red}{s})$ 3. 
Step in the environment by passing the selected action $\color{blue}{a}$4. Observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$5. Compute the TD error: $\Delta \color{green}Q \gets \color{green}{r} + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}{s}, \color{blue}{a})$, where $\color{blue}{a'} \gets \arg\max_{\color{blue}{\mathcal A}} \color{green}Q(\color{red}{s'}, \cdot)$6. Update $\color{green}Q(\color{red}{s}, \color{blue}{a}) \gets \color{green}Q(\color{red}{s}, \color{blue}{a}) + \alpha \Delta \color{green}Q$Notice that the actions $\color{blue}{a}$ and $\color{blue}{a'}$ are not selected using the same policy, hence this algorithm being **off-policy**. Coding Exercise 5.3: Implement Q-Learning ###Code QValues = np.ndarray Action = int # A value-based policy takes the Q-values at a state and returns an action. ValueBasedPolicy = Callable[[QValues], Action] class QLearningAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, behaviour_policy: ValueBasedPolicy, step_size: float = 0.1): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size # Store behavior policy. self._behaviour_policy = behaviour_policy # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the TD error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the off-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: This is very similar to what we did for SARSA, except keep in mind # that we're now taking a max over the q-values (see lecture footnotes above). # You will find the function np.max() useful. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[...] += ... # Update the current state. 
self._state = self._next_state # add event to airtable atform.add_event('Coding Exercise 5.3: Implement Q-Learning') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_8c430935.py) Run your Q-learning agent on the `obstacle` environment ###Code # @title Run your Q-learning epsilon = 1. # @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=0) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown Experiment with different levels of greediness* The default was $\epsilon=1.$, what does this correspond to?* Try also $\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way? ###Code # @title Run the cell epsilon = 0.1 # @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=epsilon) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown --- Section 6: Function Approximation*Time estimate: ~25mins* ###Code # @title Video 6: Function approximation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1sg411M7cn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"7_MYePyYhrM", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 6: Function approximation') display(out) ###Output _____no_output_____ ###Markdown So far we only considered look-up tables for value-functions. 
In all previous cases every state and action pair $(\color{red}{s}, \color{blue}{a})$, had an entry in our $\color{green}Q$-table. Again, this is possible in this environment as the number of states is equal to the number of cells in the grid. But this is not scalable to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table could be in this situation?).An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.But what we **really** want is just to be able to *compute* the Q-value, when fed with a particular $(\color{red}{s}, \color{blue}{a})$ pair. So if we had a way to get a function to do this work instead of keeping a big table, we'd get around this problem.To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** them to output the values they should. In this section, we will explore $\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf). Section 6.1 Replay BuffersAn important property of off-policy methods like $\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.In order to optimize the $\color{green}Q$-function we can then sample data from the replay **dataset** and use that data to perform an update. An illustration of this learning loop is shown below. In the next cell we will show how to implement a simple replay buffer. This can be as simple as a python list containing transition data. In more complicated scenarios we might want to have a more performance-tuned variant, we might have to be more concerned about how large replay is and what to do when its full, and we might want to sample from replay in different ways. But a simple python list can go a surprisingly long way. ###Code # Simple replay buffer # Create a convenient container for the SARS tuples required by deep RL agents. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class ReplayBuffer(object): """A simple Python replay buffer.""" def __init__(self, capacity: int = None): self.buffer = collections.deque(maxlen=capacity) self._prev_state = None def add_first(self, initial_timestep: dm_env.TimeStep): self._prev_state = initial_timestep.observation def add(self, action: int, timestep: dm_env.TimeStep): transition = Transitions( state=self._prev_state, action=action, reward=timestep.reward, discount=timestep.discount, next_state=timestep.observation, ) self.buffer.append(transition) self._prev_state = timestep.observation def sample(self, batch_size: int) -> Transitions: # Sample a random batch of Transitions as a list. 
batch_as_list = random.sample(self.buffer, batch_size) # Convert the list of `batch_size` Transitions into a single Transitions # object where each field has `batch_size` stacked fields. return tree_utils.stack_sequence_fields(batch_as_list) def flush(self) -> Transitions: entire_buffer = tree_utils.stack_sequence_fields(self.buffer) self.buffer.clear() return entire_buffer def is_ready(self, batch_size: int) -> bool: return batch_size <= len(self.buffer) ###Output _____no_output_____ ###Markdown Section 6.2: NFQ Agent[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$In other words, the value $\color{green}Q(\color{red}{s}, \color{blue}{a})$ are approximated by the output of a neural network $\color{green}{Q_w}(\color{red}{s}, \color{blue}{a})$ for each possible action $\color{blue}{a} \in \color{blue}{\mathcal{A}}$.$^2$When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.By training our neural network to output values such that the *TD error is minimized*, we will also satisfy the Bellman Optimality Equation, which is a good sufficient condition to enforce, to obtain an optimal policy.Thanks to automatic differentiation, we can just write the TD error as a loss, e.g., with an $\ell^2$ loss, but others would work too:\begin{equation}L(\color{green}w) = \mathbb{E}\left[ \left( \color{green}{r} + \gamma \max_\color{blue}{a'} \color{green}{Q_w}(\color{red}{s'}, \color{blue}{a'}) − \color{green}{Q_w}(\color{red}{s}, \color{blue}{a}) \right)^2\right].\end{equation}Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.NFQ builds on $\color{green}Q$-learning, but if one were to update the Q-values online directly, the training can be unstable and very slow.Instead, NFQ uses a replay buffer, similar to what we see implemented above (Section 6.1), to update the Q-value in a batched setting.When it was introduced, it also was entirely off-policy using a uniformly random policy to collect data, which was prone to instability when applied to more complex environments (e.g. when the input are pixels or the tasks are longer and more complicated).But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.Below you will find an incomplete NFQ agent that takes in observations from a gridworld. Instead of receiving a tabular state, it receives an observation in the form of its (x,y) coordinates in the gridworld, and the (x,y) coordinates of the goal.The goal of this coding exercise is to complete this agent by implementing the loss, using mean squared error.---$^1$ If you read the NFQ paper, they use a "control" notation, where there is a "cost to minimize", instead of "rewards to maximize", so don't be surprised if signs/max/min do not correspond.$^2$ We could feed it $\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $argmax$ over them, it's easiest to just output them all in one pass. 
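Before completing the NFQ agent, it may help to see the `ReplayBuffer` from Section 6.1 driven in isolation. The sketch below fills a buffer with random-action transitions and then samples a batch; the capacity, batch size, and step count are arbitrary illustrative choices:

```python
# Minimal sketch: collect transitions with random actions, then sample a batch.
buffer = ReplayBuffer(capacity=10_000)

timestep = environment.reset()
buffer.add_first(timestep)

for _ in range(200):
    action = np.random.randint(environment_spec.actions.num_values)
    timestep = environment.step(action)
    buffer.add(action, timestep)
    if timestep.last():  # Episode ended: restart and record the first observation.
        timestep = environment.reset()
        buffer.add_first(timestep)

if buffer.is_ready(batch_size=16):
    batch = buffer.sample(batch_size=16)  # A Transitions tuple with stacked fields.
    print(batch.state.shape, batch.action.shape, batch.reward.shape)
```

The NFQ agent below wires this same pattern into its `observe_first`, `observe`, and `update` methods.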
Coding Exercise 6.1: Implement NFQ ###Code # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class NeuralFittedQAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, q_network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 3e-4): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): q_next_s = self._q_network(next_s) # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the NFQ Agent") ################################################# # TODO Average the squared TD errors over the entire batch using # self._loss_fn, which is defined above as nn.MSELoss() # HINT: Take a look at the reference for nn.MSELoss here: # https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html # What should you put for the input and the target? loss = ... # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() # Store the loss for logging purposes (see run_loop implementation above). 
self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # add event to airtable atform.add_event('Coding Exercise 6.1: Implement NFQ') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_f331422f.py) Train and Evaluate the NFQ Agent ###Code # @title Training the NFQ Agent epsilon = 0.4 # @param {type:"number"} max_episode_length = 200 # Create the environment. grid = build_gridworld_task( task='simple', observation_type=ObservationType.AGENT_GOAL_POS, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Define the neural function approximator (aka Q network). q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values)) # Build the trainable Q-learning agent agent = NeuralFittedQAgent( environment_spec, q_network, epsilon=epsilon, replay_capacity=100_000, batch_size=10, learning_rate=1e-3) returns = run_loop( environment=environment, agent=agent, num_episodes=500, logger_time_delta=1., log_loss=True) # @title Evaluating the agent (set $\epsilon=0$) # Temporarily change epsilon to be more greedy; remember to change it back. agent._epsilon = 0.0 # Record a few episodes. frames = evaluate(environment, agent, evaluation_episodes=5) # Change epsilon back. agent._epsilon = epsilon # Display the video of the episodes. display_video(frames, frame_rate=6) # @title Visualise the learned $Q$ values # Evaluate the policy for every state, similar to tabular agents above. environment.reset() pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4, )) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) ###Output _____no_output_____ ###Markdown Compare the Q-values approximated with the neural network with the tabular case in **Section 5.3**. Notice how the neural network is generalizing from the visited states to the unvisited similar states, while in the tabular case we updated the value of each state only when we visited that state. Compare the greedy and behaviour ($\epsilon$-greedy) policies ###Code # @title Compare the greedy policy with the agent's policy # @markdown Notice that the agent's behavior policy has a lot more randomness, # @markdown due to the high $\epsilon$. However, the greedy policy that's learned # @markdown is optimal. 
environment.plot_greedy_policy(q) plt.figtext(-.08, .95, 'Greedy policy using the learnt Q-values') plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's behavior policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown --- Section 7: RL in the real world*Time estimate: ~10mins* ###Code # @title Video 7: Real-world applications and ethics from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Nq4y1X7AF", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"5kBtiW88QVw", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 7: Real-world applications and ethics') display(out) ###Output _____no_output_____ ###Markdown Exercise 7: Group discussionForm a group of 2-3 and have discussions (roughly 3 minutes each) of the following questions:1. **Safety**: What are some safety issues that arise in RL that don’t arise with e.g. supervised learning?2. **Generalization**: What happens if your RL agent is presented with data it hasn’t trained on? (“goes out of distribution”)3. How important do you think **interpretability** is in the ethical and safe deployment of RL agents in the real world? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_99944c89.py) --- Summary*Time estimate: ~3mins* In this tutorial we covered the most important aspects of RL. Within the RL framework, we identified the different components: environment, agent, states, and actions. In addition, we studied the Bellman equation and the components involved.We implemented tabular value-based model-free learning (Q-learning and SARSA). Finally, we discussed real-world applications and ethical issues of RL.If you have time left, the Bonus sections let you run a DQN agent and experiment with different hyperparameters (Bonus 1), and give you a high-level understanding of other (non-value-based) RL methods (Bonus 2).See also our *Appendix and further reading* for more information.
###Code # @title Video 8: How to learn more from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1WM4y1T7G5", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"dKaOpgor5Ek", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 8: How to learn more') display(out) # @title Airtable Submission Link from IPython import display as IPydisplay IPydisplay.HTML( f""" <div> <a href= "{atform.url()}" target="_blank"> <img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1" alt="button link end of day Survey" style="width:410px"></a> </div>""" ) ###Output _____no_output_____ ###Markdown --- Bonus 1: DQN*Time estimate: ~30mins* ###Code # @title Video 9: Deep Q-Networks (DQN) from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Mo4y1Q7yD", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HEDoNtV1y-w", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 9: Deep Q-Networks (DQN)') display(out) ###Output _____no_output_____ ###Markdown Adopted from Mnih et al., 2015 In this section, we will look at an advanced deep RL Agent based on the following publication, [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.Here the agent will act directly on a pixel representation of the gridworld. You can find an incomplete implementation below. Bonus Coding Exercise 1: Run a DQN Agent ###Code class DQN(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 5e-4, target_update_frequency: int = 10): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # create a second q net with the same structure and initial values, which # we'll be updating separately from the learned q-network. 
self._target_network = copy.deepcopy(self._q_network) # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Keep an internal tracker of steps self._current_step = 0 # How often to update the target network self._target_update_frequency = target_update_frequency # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(), lr=learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. # Sonnet requires a batch dimension, which we squeeze out right after. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): self._current_step += 1 if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Optionally unpack the transitions to lighten notation. # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the DQN Agent") ################################################# #TODO get the value of the next states evaluated by the target network #HINT: use self._target_network, defined above. q_next_s = ... # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) # Average the squared TD errors over the entire batch loss = self._loss_fn(target_q_value, q_s_a) # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() if self._current_step % self._target_update_frequency == 0: self._target_network.load_state_dict(self._q_network.state_dict()) # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # Create a convenient container for the SARS tuples required by NFQ. 
Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_c2f18cc9.py) ###Code # @title Train and evaluate the DQN agent epsilon = 0.25 # @param {type: "number"} num_episodes = 500 # @param {type: "integer"} max_episode_length = 50 # @param {type: "integer"} grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Build the agent's network. class Permute(nn.Module): def __init__(self, order: list): super(Permute,self).__init__() self.order = order def forward(self, x): return x.permute(self.order) q_network = nn.Sequential(Permute([0, 3, 1, 2]), nn.Conv2d(3, 32, kernel_size=4, stride=2,padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(3, 1), nn.Flatten(), nn.Linear(384, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values) ) agent = DQN( environment_spec=environment_spec, network=q_network, batch_size=10, epsilon=epsilon, target_update_frequency=25) returns = run_loop( environment=environment, agent=agent, num_episodes=num_episodes, num_steps=100000) # @title Visualise the learned $Q$ values # Evaluate the policy for every state, similar to tabular agents above. pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) # @title Compare the greedy policy with the agent's policy environment.plot_greedy_policy(q) plt.figtext(-.08, .95, "Greedy policy using the learnt Q-values") plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's epsilon-greedy policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown **Note:** You’ll get a better estimate of the value functions if you increase `num_episodes` and `max_episode_length`, but this will take longer to train. Feel free to play around after the day! 
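If you would also like to watch the trained DQN agent act in the grid-world, the same evaluation pattern used for the NFQ agent above applies here; the sketch below temporarily switches off exploration and then restores it (the number of evaluation episodes is an arbitrary choice):

```python
# Sketch: evaluate the trained DQN greedily, mirroring the NFQ evaluation cell above.
agent._epsilon = 0.0  # Act greedily for evaluation.
frames = evaluate(environment, agent, evaluation_episodes=2)
agent._epsilon = epsilon  # Restore the exploration rate used during training.
display_video(frames, frame_rate=6)
```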
--- Bonus 2: Beyond Value Based Model-Free Methods*Time estimate: ~25mins* ###Code # @title Video 10: Other RL Methods from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV14w411977Y", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"1N4Jm9loJx4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 10: Other RL Methods') display(out) ###Output _____no_output_____ ###Markdown Cartpole taskHere we switch to training on a different kind of task, which has a continuous action space: Cartpole in [Gym](https://gym.openai.com/). As you recall from the video, policy-based methods are particularly well-suited for these kinds of tasks. We will be exploring two of those methods below. ###Code # @title Make a CartPole environment, `gym.make('CartPole-v1')` env = gym.make('CartPole-v1') # Set seeds env.seed(SEED) set_seed(SEED) ###Output _____no_output_____ ###Markdown Bonus 2.1: Policy gradientNow we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. $\color{blue}\pi(\color{red}s) = \arg\max_{\color{blue}a}\color{green}Q(\color{red}s, \color{blue}a)$, we will directly parameterize the policy and write it as the distribution\begin{equation}\color{blue}a_t \sim \color{blue}\pi_{\theta}(\color{blue}a_t|\color{red}s_t).\end{equation}Here $\theta$ represent the parameters of the policy. We will update the policy parameters using gradient ascent to **maximize** expected future reward.One convenient way to represent the conditional distribution above is as a function that takes a state $\color{red}s$ and returns a distribution over actions $\color{blue}a$.Defined below is an agent which implements the REINFORCE algorithm. REINFORCE (Williams 1992) is the simplest model-free general reinforcement learning technique.The **basic idea** is to use probabilistic action choice. If the reward at the end turns out to be high, we make **all** actions in this sequence **more likely** (otherwise, we do the opposite).This strategy could reinforce "bad" actions as well, however they will turn out to be part of trajectories with low reward and will likely not get accentuated.From the lectures, we know that we need to compute\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}where $\color{green} G_t$ is the sum over future rewards from time $t$, defined as\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. 
It will then update its policy given the above gradient (and the Adam optimizer).A policy gradient trains an agent without explicitly mapping the value for every state-action pair in an environment by taking small steps and updating the policy based on the reward associated with that step. In this section, we will build a small network that trains using policy gradient using PyTorch.The agent can receive a reward immediately for an action or it can receive the award at a later time such as the end of the episode. The policy function our agent will try to learn is $\pi_\theta(a,s)$, where $\theta$ is the parameter vector, $s$ is a particular state, and $a$ is an action.Monte-Carlo Policy Gradient approach will be used, which means the agent will run through an entire episode and then update policy based on the rewards obtained. ###Code # @title Set the hyperparameters for Policy Gradient num_steps = 300 learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # @param {type:"number"} # @markdown Only used in Policy Gradient Method: hidden_neurons = 128 # @param {type:"integer"} ###Output _____no_output_____ ###Markdown Bonus Coding Exercise 2.1: Creating a simple neural networkBelow you will find some incomplete code. Fill in the missing code to construct the specified neural network.Let us define a simple feed forward neural network with one hidden layer of 128 neurons and a dropout of 0.6. Let's use Adam as our optimizer and a learning rate of 0.01. Use the hyperparameters already defined rather than using explicit values.Using dropout will significantly improve the performance of the policy. Do compare your results with and without dropout and experiment with other hyper-parameter values as well. ###Code class PolicyGradientNet(nn.Module): def __init__(self): super(PolicyGradientNet, self).__init__() self.state_space = env.observation_space.shape[0] self.action_space = env.action_space.n ################################################# ## TODO for students: Define two linear layers ## from the first expression raise NotImplementedError("Student exercise: Create FF neural network.") ################################################# # HINT: you can construct linear layers using nn.Linear(); what are the # sizes of the inputs and outputs of each of the layers? Also remember # that you need to use hidden_neurons (see hyperparameters section above). # https://pytorch.org/docs/stable/generated/torch.nn.Linear.html self.l1 = ... self.l2 = ... self.gamma = ... # Episode policy and past rewards self.past_policy = Variable(torch.Tensor()) self.reward_episode = [] # Overall reward and past loss self.past_reward = [] self.past_loss = [] def forward(self, x): model = torch.nn.Sequential( self.l1, nn.Dropout(p=dropout), nn.ReLU(), self.l2, nn.Softmax(dim=-1) ) return model(x) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_9aaf4a83.py) Now let's create an instance of the network we have defined and use Adam as the optimizer using the learning_rate as hyperparameter already defined above. ###Code policy = PolicyGradientNet() pg_optimizer = optim.Adam(policy.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Select ActionThe `select_action()` function chooses an action based on our policy probability distribution using the PyTorch distributions package. 
Our policy returns a probability for each possible action in our action space (move left or move right) as an array of length two such as [0.7, 0.3]. We then choose an action based on these probabilities, record our history, and return our action. ###Code def select_action(state): #Select an action (0 or 1) by running policy model and choosing based on the probabilities in state state = torch.from_numpy(state).type(torch.FloatTensor) state = policy(Variable(state)) c = Categorical(state) action = c.sample() # Add log probability of chosen action if policy.past_policy.dim() != 0: policy.past_policy = torch.cat([policy.past_policy, c.log_prob(action).reshape(1)]) else: policy.past_policy = (c.log_prob(action).reshape(1)) return action ###Output _____no_output_____ ###Markdown Update policyThis function updates the policy. Reward $G_t$We update our policy by taking a sample of the action value function $Q^{\pi_\theta} (s_t,a_t)$ by playing through episodes of the game. $Q^{\pi_\theta} (s_t,a_t)$ is defined as the expected return by taking action $a$ in state $s$ following policy $\pi$.We know that for every step the simulation continues we receive a reward of 1. We can use this to calculate the policy gradient at each time step, where $r$ is the reward for a particular state-action pair. Rather than using the instantaneous reward, $r$, we instead use a long term reward $ v_{t} $ where $v_t$ is the discounted sum of all future rewards for the length of the episode. $v_{t}$ is then,\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}where $\gamma$ is the discount factor (0.99). For example, if an episode lasts 5 steps, the reward for each step will be [4.90, 3.94, 2.97, 1.99, 1].Next we scale our reward vector by substracting the mean from each element and scaling to unit variance by dividing by the standard deviation. This practice is common for machine learning applications and the same operation as Scikit Learn's __[StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)__. It also has the effect of compensating for future uncertainty. Update Policy: equationAfter each episode we apply Monte-Carlo Policy Gradient to improve our policy according to the equation:\begin{equation}\Delta\theta_t = \alpha\nabla_\theta \, \log \pi_\theta (s_t,a_t)G_t\end{equation}We will then feed our policy history multiplied by our rewards to our optimizer and update the weights of our neural network using stochastic gradient **ascent**. This should increase the likelihood of actions that got our agent a larger reward. The following function ```update_policy``` updates the network weights and therefore the policy. ###Code def update_policy(): R = 0 rewards = [] # Discount future rewards back to the present using gamma for r in policy.reward_episode[::-1]: R = r + policy.gamma * R rewards.insert(0, R) # Scale rewards rewards = torch.FloatTensor(rewards) rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps) # Calculate loss pg_loss = (torch.sum(torch.mul(policy.past_policy, Variable(rewards)).mul(-1), -1)) # Update network weights # Use zero_grad(), backward() and step() methods of the optimizer instance. 
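    # Note: pg_loss above is the *negative* of the REINFORCE objective
    # (sum of log-probabilities weighted by the scaled returns, see .mul(-1)),
    # so minimizing it with the optimizer performs gradient ascent on the
    # expected return.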
pg_optimizer.zero_grad() pg_loss.backward() # Update the weights for param in policy.parameters(): param.grad.data.clamp_(-1, 1) pg_optimizer.step() # Save and intialize episode past counters policy.past_loss.append(pg_loss.item()) policy.past_reward.append(np.sum(policy.reward_episode)) policy.past_policy = Variable(torch.Tensor()) policy.reward_episode= [] ###Output _____no_output_____ ###Markdown TrainingThis is our main policy training loop. For each step in a training episode, we choose an action, take a step through the environment, and record the resulting new state and reward. We call update_policy() at the end of each episode to feed the episode history to our neural network and improve our policy. ###Code def policy_gradient_train(episodes): running_reward = 10 for episode in range(episodes): state = env.reset() done = False for time in range(1000): action = select_action(state) # Step through environment using chosen action state, reward, done, _ = env.step(action.item()) # Save reward policy.reward_episode.append(reward) if done: break # Used to determine when the environment is solved. running_reward = (running_reward * gamma) + (time * (1 - gamma)) update_policy() if episode % 50 == 0: print(f"Episode {episode}\tLast length: {time:5.0f}" f"\tAverage length: {running_reward:.2f}") if running_reward > env.spec.reward_threshold: print(f"Solved! Running reward is now {running_reward} " f"and the last episode runs to {time} time steps!") break ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 #@param {type:"integer"} policy_gradient_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for policy gradient def plot_policy_gradient_training(): window = int(episodes / 20) fig, ((ax1), (ax2)) = plt.subplots(1, 2, sharey=True, figsize=[15, 4]); rolling_mean = pd.Series(policy.past_reward).rolling(window).mean() std = pd.Series(policy.past_reward).rolling(window).std() ax1.plot(rolling_mean) ax1.fill_between(range(len(policy.past_reward)), rolling_mean-std, rolling_mean+std, color='orange', alpha=0.2) ax1.set_title(f"Episode Length Moving Average ({window}-episode window)") ax1.set_xlabel('Episode'); ax1.set_ylabel('Episode Length') ax2.plot(policy.past_reward) ax2.set_title('Episode Length') ax2.set_xlabel('Episode') ax2.set_ylabel('Episode Length') fig.tight_layout(pad=2) plt.show() plot_policy_gradient_training() ###Output _____no_output_____ ###Markdown Bonus Exercise 2.1: Explore different hyperparameters.Try running the model again, by modifying the hyperparameters and observe the outputs. Be sure to rerun the function definition cells in order to pick up on the updated values.What do you see when you 1. increase learning rate2. decrease learning rate3. decrease gamma ($\gamma$)4. increase number of hidden neurons in the network Bonus 2.2: Actor-criticRecall the policy gradient\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}The policy parameters are updated using Monte Carlo technique and uses random samples. This introduces high variability in log probabilities and cumulative reward values. 
This leads to noisy gradients and can cause unstable learning.One way to reduce variance and increase stability is subtracting the cumulative reward by a baseline:\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} (G_t - b) \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}Intuitively, reducing cumulative reward will make smaller gradients and thus smaller and more stable (hopefully) updates.From the lecture slides, we know that in Actor Critic Method:1. The “Critic” estimates the value function. This could be the action-value (the Q value) or state-value (the V value).2. The “Actor” updates the policy distribution in the direction suggested by the Critic (such as with policy gradients).Both the Critic and Actor functions are parameterized with neural networks. The "Critic" network parameterizes the Q-value. ###Code # @title Set the hyperparameters for Actor Critic learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # Only used in Actor-Critic Method hidden_size = 256 # @param {type:"integer"} num_steps = 300 ###Output _____no_output_____ ###Markdown Actor Critic Network ###Code class ActorCriticNet(nn.Module): def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4): super(ActorCriticNet, self).__init__() self.num_actions = num_actions self.critic_linear1 = nn.Linear(num_inputs, hidden_size) self.critic_linear2 = nn.Linear(hidden_size, 1) self.actor_linear1 = nn.Linear(num_inputs, hidden_size) self.actor_linear2 = nn.Linear(hidden_size, num_actions) self.all_rewards = [] self.all_lengths = [] self.average_lengths = [] def forward(self, state): state = Variable(torch.from_numpy(state).float().unsqueeze(0)) value = F.relu(self.critic_linear1(state)) value = self.critic_linear2(value) policy_dist = F.relu(self.actor_linear1(state)) policy_dist = F.softmax(self.actor_linear2(policy_dist), dim=1) return value, policy_dist ###Output _____no_output_____ ###Markdown Training ###Code def actor_critic_train(episodes): all_lengths = [] average_lengths = [] all_rewards = [] entropy_term = 0 for episode in range(episodes): log_probs = [] values = [] rewards = [] state = env.reset() for steps in range(num_steps): value, policy_dist = actor_critic.forward(state) value = value.detach().numpy()[0, 0] dist = policy_dist.detach().numpy() action = np.random.choice(num_outputs, p=np.squeeze(dist)) log_prob = torch.log(policy_dist.squeeze(0)[action]) entropy = -np.sum(np.mean(dist) * np.log(dist)) new_state, reward, done, _ = env.step(action) rewards.append(reward) values.append(value) log_probs.append(log_prob) entropy_term += entropy state = new_state if done or steps == num_steps - 1: qval, _ = actor_critic.forward(new_state) qval = qval.detach().numpy()[0, 0] all_rewards.append(np.sum(rewards)) all_lengths.append(steps) average_lengths.append(np.mean(all_lengths[-10:])) if episode % 50 == 0: print(f"episode: {episode},\treward: {np.sum(rewards)}," f"\ttotal length: {steps}," f"\taverage length: {average_lengths[-1]}") break # compute Q values qvals = np.zeros_like(values) for t in reversed(range(len(rewards))): qval = rewards[t] + gamma * qval qvals[t] = qval #update actor critic values = torch.FloatTensor(values) qvals = torch.FloatTensor(qvals) log_probs = torch.stack(log_probs) advantage = qvals - values actor_loss = (-log_probs * advantage).mean() critic_loss = 0.5 * advantage.pow(2).mean() ac_loss = actor_loss + critic_loss + 0.001 * entropy_term ac_optimizer.zero_grad() 
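        # One Adam step updates the actor and the critic together: ac_loss
        # combines the policy-gradient (actor) term, the squared-advantage
        # (critic) term, and a small entropy regularization term.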
ac_loss.backward() ac_optimizer.step() # Store results actor_critic.average_lengths = average_lengths actor_critic.all_rewards = all_rewards actor_critic.all_lengths = all_lengths ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 # @param {type:"integer"} env.reset() num_inputs = env.observation_space.shape[0] num_outputs = env.action_space.n actor_critic = ActorCriticNet(num_inputs, num_outputs, hidden_size) ac_optimizer = optim.Adam(actor_critic.parameters()) actor_critic_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for Actor Critic def plot_actor_critic_training(actor_critic, episodes): window = int(episodes / 20) plt.figure(figsize=(15, 4)) plt.subplot(1, 2, 1) smoothed_rewards = pd.Series(actor_critic.all_rewards).rolling(window).mean() std = pd.Series(actor_critic.all_rewards).rolling(window).std() plt.plot(smoothed_rewards, label='Smoothed rewards') plt.fill_between(range(len(smoothed_rewards)), smoothed_rewards - std, smoothed_rewards + std, color='orange', alpha=0.2) plt.xlabel('Episode') plt.ylabel('Reward') plt.subplot(1, 2, 2) plt.plot(actor_critic.all_lengths, label='All lengths') plt.plot(actor_critic.average_lengths, label='Average lengths') plt.xlabel('Episode') plt.ylabel('Episode length') plt.legend() plt.tight_layout() plt.show() plot_actor_critic_training(actor_critic, episodes) ###Output _____no_output_____ ###Markdown Tutorial 1: Introduction to Reinforcement Learning**Week 3, Day 2: Basic Reinforcement Learning (RL)****By Neuromatch Academy**__Content creators:__ Matthew Sargent, Anoop Kulkarni, Sowmya Parthiban, Feryal Behbahani, Jane Wang__Content reviewers:__ Ezekiel Williams, Mehul Rastogi, Lily Cheng, Roberto Guidotti, Arush Tagade__Content editors:__ Spiros Chavlis __Production editors:__ Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesBy the end of the tutorial, you should be able to:1. Within the RL framework, be able to identify the different components: environment, agent, states, and actions. 2. Understand the Bellman equation and components involved. 3. Implement tabular value-based model-free learning (Q-learning and SARSA).4. Run a DQN agent and experiment with different hyperparameters.5. Have a high-level understanding of other (nonvalue-based) RL methods.6. Discuss real-world applications and ethical issues of RL.**Note:** There is an issue with some images not showing up if you're using a Safari browser. Please switch to Chrome if this is the case. ###Code # @title Tutorial slides # @markdown These are the slides for the videos in this tutorial from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/m3kqy/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown --- SetupRun the following 5 cells in order to set up needed functions. Don't worry about the code for now! ###Code # @title Install requirements from IPython.display import clear_output # @markdown we install the acme library, see [here](https://github.com/deepmind/acme) for more info # @markdown WARNING: There may be errors and warnings reported during the installation. # @markdown However, they should be ignored. 
!apt-get install -y xvfb ffmpeg --quiet !pip install --upgrade pip --quiet !pip install imageio --quiet !pip install imageio-ffmpeg !pip install gym --quiet !pip install enum34 --quiet !pip install dm-env --quiet !pip install pandas --quiet !pip install keras-nightly==2.5.0.dev2021020510 --quiet !pip install grpcio==1.34.0 --quiet !pip install tensorflow --quiet !pip install typing --quiet !pip install einops --quiet !pip install dm-acme --quiet !pip install dm-acme[reverb] --quiet !pip install dm-acme[tf] --quiet !pip install dm-acme[envs] --quiet !pip install dm-env --quiet clear_output() # Import modules import gym import enum import copy import time import acme import torch import base64 import dm_env import IPython import imageio import warnings import itertools import collections import numpy as np import pandas as pd import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import matplotlib as mpl import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf from acme import specs from acme import wrappers from acme.utils import tree_utils from acme.utils import loggers from torch.autograd import Variable from torch.distributions import Categorical from typing import Callable, Sequence tf.enable_v2_behavior() warnings.filterwarnings('ignore') np.set_printoptions(precision=3, suppress=1) # @title Figure settings import ipywidgets as widgets # interactive display %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") mpl.rc('image', cmap='Blues') # @title Helper Functions # @markdown Implement helpers for value visualisation map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a] map_from_action_to_name = lambda a: ("up", "right", "down", "left")[a] def plot_values(values, colormap='pink', vmin=-1, vmax=10): plt.imshow(values, interpolation="nearest", cmap=colormap, vmin=vmin, vmax=vmax) plt.yticks([]) plt.xticks([]) plt.colorbar(ticks=[vmin, vmax]) def plot_state_value(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(4, 4)) vmin = np.min(action_values) vmax = np.max(action_values) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_action_values(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(8, 8)) fig.subplots_adjust(wspace=0.3, hspace=0.3) vmin = np.min(action_values) vmax = np.max(action_values) dif = vmax - vmin for a in [0, 1, 2, 3]: plt.subplot(3, 3, map_from_action_to_subplot(a)) plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif) action_name = map_from_action_to_name(a) plt.title(r"$q(s, \mathrm{" + action_name + r"})$") plt.subplot(3, 3, 5) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_stats(stats, window=10): plt.figure(figsize=(16,4)) plt.subplot(121) xline = range(0, len(stats.episode_lengths), window) plt.plot(xline, smooth(stats.episode_lengths, window=window)) plt.ylabel('Episode Length') plt.xlabel('Episode Count') plt.subplot(122) plt.plot(xline, smooth(stats.episode_rewards, window=window)) plt.ylabel('Episode Return') plt.xlabel('Episode Count') # @title Helper functions def smooth(x, window=10): return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1) # @title Set random seed # @markdown Executing `set_seed(seed=seed)` you are 
setting the seed # for DL its critical to set the random seed so that students can have a # baseline to compare their results to expected results. # Read more here: https://pytorch.org/docs/stable/notes/randomness.html # Call `set_seed` function in the exercises to ensure reproducibility. import random import torch def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') # In case that `DataLoader` is used def seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 np.random.seed(worker_seed) random.seed(worker_seed) # @title Set device (GPU or CPU). Execute `set_device()` # especially if torch modules used. # inform the user if the notebook uses GPU or CPU. def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device SEED = 2021 set_seed(seed=SEED) DEVICE = set_device() ###Output _____no_output_____ ###Markdown --- Section 1: Introduction to Reinforcement Learning ###Code # @title Video 1: Introduction to RL from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV18V411p7iK", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"BWz3scQN50M", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Acme: a research framework for reinforcement learning**Acme** is a library of reinforcement learning (RL) agents and agent building blocks by Google DeepMind. Acme strives to expose simple, efficient, and readable agents, that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to do novel research. The design of Acme also attempts to provide multiple points of entry to the RL problem at differing levels of complexity.For more information see [github repository](https://github.com/deepmind/acme). 
--- Section 2: General Formulation of RL Problems and Gridworlds ###Code # @title Video 2: General Formulation and MDPs from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1k54y1E7Zn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"h6TxAALY5Fc", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown The agent interacts with the environment in a loop corresponding to the following diagram. The environment defines a set of **actions** that an agent can take. The agent takes an action informed by the **observations** it receives, and will get a **reward** from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulation of rewards obtained from the environment. Section 2.1: The Environment For this practical session we will focus on a **simple grid world** environment,which consists of a 9 x 10 grid of either wall or empty cells, depicted in black and white, respectively. The smiling agent starts from an initial location and needs to navigate to reach the goal square.Below you will find an implementation of this Gridworld as a ```dm_env.Environment```.There is no coding in this section, but if you want, you can look over the provided code so that you can familiarize yourself with an example of how to set up a **grid world** environment. ###Code # @title Implement GridWorld { form-width: "30%" } # @markdown *Double-click* to inspect the contents of this cell. class ObservationType(enum.IntEnum): STATE_INDEX = enum.auto() AGENT_ONEHOT = enum.auto() GRID = enum.auto() AGENT_GOAL_POS = enum.auto() class GridWorld(dm_env.Environment): def __init__(self, layout, start_state, goal_state=None, observation_type=ObservationType.STATE_INDEX, discount=0.9, penalty_for_walls=-5, reward_goal=10, max_episode_length=None, randomize_goals=False): """Build a grid environment. Simple gridworld defined by a map layout, a start and a goal state. Layout should be a NxN grid, containing: * 0: empty * -1: wall * Any other positive value: value indicates reward; episode will terminate Args: layout: NxN array of numbers, indicating the layout of the environment. start_state: Tuple (y, x) of starting location. goal_state: Optional tuple (y, x) of goal location. Will be randomly sampled once if None. observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x) discount: Discounting factor included in all Timesteps. 
penalty_for_walls: Reward added when hitting a wall (should be negative). reward_goal: Reward added when finding the goal (should be positive). max_episode_length: If set, will terminate an episode after this many steps. randomize_goals: If true, randomize goal at every episode. """ if observation_type not in ObservationType: raise ValueError('observation_type should be a ObservationType instace.') self._layout = np.array(layout) self._start_state = start_state self._state = self._start_state self._number_of_states = np.prod(np.shape(self._layout)) self._discount = discount self._penalty_for_walls = penalty_for_walls self._reward_goal = reward_goal self._observation_type = observation_type self._layout_dims = self._layout.shape self._max_episode_length = max_episode_length self._num_episode_steps = 0 self._randomize_goals = randomize_goals if goal_state is None: # Randomly sample goal_state if not provided goal_state = self._sample_goal() self.goal_state = goal_state def _sample_goal(self): """Randomly sample reachable non-starting state.""" # Sample a new goal n = 0 max_tries = 1e5 while n < max_tries: goal_state = tuple(np.random.randint(d) for d in self._layout_dims) if goal_state != self._state and self._layout[goal_state] == 0: # Reachable state found! return goal_state n += 1 raise ValueError('Failed to sample a goal state.') @property def layout(self): return self._layout @property def number_of_states(self): return self._number_of_states @property def goal_state(self): return self._goal_state @property def start_state(self): return self._start_state @property def state(self): return self._state def set_state(self, x, y): self._state = (y, x) @goal_state.setter def goal_state(self, new_goal): if new_goal == self._state or self._layout[new_goal] < 0: raise ValueError('This is not a valid goal!') # Zero out any other goal self._layout[self._layout > 0] = 0 # Setup new goal location self._layout[new_goal] = self._reward_goal self._goal_state = new_goal def observation_spec(self): if self._observation_type is ObservationType.AGENT_ONEHOT: return specs.Array( shape=self._layout_dims, dtype=np.float32, name='observation_agent_onehot') elif self._observation_type is ObservationType.GRID: return specs.Array( shape=self._layout_dims + (3,), dtype=np.float32, name='observation_grid') elif self._observation_type is ObservationType.AGENT_GOAL_POS: return specs.Array( shape=(4,), dtype=np.float32, name='observation_agent_goal_pos') elif self._observation_type is ObservationType.STATE_INDEX: return specs.DiscreteArray( self._number_of_states, dtype=int, name='observation_state_index') def action_spec(self): return specs.DiscreteArray(4, dtype=int, name='action') def get_obs(self): if self._observation_type is ObservationType.AGENT_ONEHOT: obs = np.zeros(self._layout.shape, dtype=np.float32) # Place agent obs[self._state] = 1 return obs elif self._observation_type is ObservationType.GRID: obs = np.zeros(self._layout.shape + (3,), dtype=np.float32) obs[..., 0] = self._layout < 0 obs[self._state[0], self._state[1], 1] = 1 obs[self._goal_state[0], self._goal_state[1], 2] = 1 return obs elif self._observation_type is ObservationType.AGENT_GOAL_POS: return np.array(self._state + self._goal_state, dtype=np.float32) elif self._observation_type is ObservationType.STATE_INDEX: y, x = self._state return y * self._layout.shape[1] + x def reset(self): self._state = self._start_state self._num_episode_steps = 0 if self._randomize_goals: self.goal_state = self._sample_goal() return dm_env.TimeStep( 
step_type=dm_env.StepType.FIRST, reward=None, discount=None, observation=self.get_obs()) def step(self, action): y, x = self._state if action == 0: # up new_state = (y - 1, x) elif action == 1: # right new_state = (y, x + 1) elif action == 2: # down new_state = (y + 1, x) elif action == 3: # left new_state = (y, x - 1) else: raise ValueError( 'Invalid action: {} is not 0, 1, 2, or 3.'.format(action)) new_y, new_x = new_state step_type = dm_env.StepType.MID if self._layout[new_y, new_x] == -1: # wall reward = self._penalty_for_walls discount = self._discount new_state = (y, x) elif self._layout[new_y, new_x] == 0: # empty cell reward = 0. discount = self._discount else: # a goal reward = self._layout[new_y, new_x] discount = 0. new_state = self._start_state step_type = dm_env.StepType.LAST self._state = new_state self._num_episode_steps += 1 if (self._max_episode_length is not None and self._num_episode_steps >= self._max_episode_length): step_type = dm_env.StepType.LAST return dm_env.TimeStep( step_type=step_type, reward=np.float32(reward), discount=discount, observation=self.get_obs()) def plot_grid(self, add_start=True): plt.figure(figsize=(4, 4)) plt.imshow(self._layout <= -1, interpolation='nearest') ax = plt.gca() ax.grid(0) plt.xticks([]) plt.yticks([]) # Add start/goal if add_start: plt.text( self._start_state[1], self._start_state[0], r'$\mathbf{S}$', fontsize=16, ha='center', va='center') plt.text( self._goal_state[1], self._goal_state[0], r'$\mathbf{G}$', fontsize=16, ha='center', va='center') h, w = self._layout.shape for y in range(h - 1): plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-w', lw=2) for x in range(w - 1): plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-w', lw=2) def plot_state(self, return_rgb=False): self.plot_grid(add_start=False) # Add the agent location plt.text( self._state[1], self._state[0], u'😃', # fontname='symbola', fontsize=18, ha='center', va='center', ) if return_rgb: fig = plt.gcf() plt.axis('tight') plt.subplots_adjust(0, 0, 1, 1, 0, 0) fig.canvas.draw() data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') w, h = fig.canvas.get_width_height() data = data.reshape((h, w, 3)) plt.close(fig) return data def plot_policy(self, policy): action_names = [ r'$\uparrow$', r'$\rightarrow$', r'$\downarrow$', r'$\leftarrow$' ] self.plot_grid() plt.title('Policy Visualization') h, w = self._layout.shape for y in range(h): for x in range(w): # if ((y, x) != self._start_state) and ((y, x) != self._goal_state): if (y, x) != self._goal_state: action_name = action_names[policy[y, x]] plt.text(x, y, action_name, ha='center', va='center') def plot_greedy_policy(self, q): greedy_actions = np.argmax(q, axis=2) self.plot_policy(greedy_actions) def build_gridworld_task(task, discount=0.9, penalty_for_walls=-5, observation_type=ObservationType.STATE_INDEX, max_episode_length=200): """Construct a particular Gridworld layout with start/goal states. Args: task: string name of the task to use. One of {'simple', 'obstacle', 'random_goal'}. discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. 
First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x). max_episode_length: If set, will terminate an episode after this many steps. """ tasks_specifications = { 'simple': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (7, 2) }, 'obstacle': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1], [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (2, 8) }, 'random_goal': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), # 'randomize_goals': True }, } return GridWorld( discount=discount, penalty_for_walls=penalty_for_walls, observation_type=observation_type, max_episode_length=max_episode_length, **tasks_specifications[task]) def setup_environment(environment): """Returns the environment and its spec.""" # Make sure the environment outputs single-precision floats. environment = wrappers.SinglePrecisionWrapper(environment) # Grab the spec of the environment. environment_spec = specs.make_environment_spec(environment) return environment, environment_spec ###Output _____no_output_____ ###Markdown We will use two distinct tabular GridWorlds:* `simple` where the goal is at the bottom left of the grid, little navigation required.* `obstacle` where the goal is behind an obstacle the agent must avoid.You can visualize the grid worlds by running the cell below. Note that **S** indicates the start state and **G** indicates the goal. ###Code # Visualise GridWorlds # Instantiate two tabular environments, a simple task, and one that involves # the avoidance of an obstacle. simple_grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID) obstacle_grid = build_gridworld_task( task='obstacle', observation_type=ObservationType.GRID) # Plot them. simple_grid.plot_grid() plt.title('Simple') obstacle_grid.plot_grid() plt.title('Obstacle') ###Output _____no_output_____ ###Markdown In this environment, the agent has four possible **actions**: `up`, `right`, `down`, and `left`. The **reward** is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal, and otherwise continues. The **discount** on continuing steps, is $\gamma = 0.9$. Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g., **observations**) or consumes (e.g., **actions**). 
The `environment_spec` will show you the form of the **observations**, **rewards** and **discounts** that the environment exposes and the form of the **actions** that can be taken. ###Code # @title Look at environment_spec { form-width: "30%" } # Note: setup_environment is implemented in the same cell as GridWorld. environment, environment_spec = setup_environment(simple_grid) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ###Output _____no_output_____ ###Markdown We first set the environment to its initial state by calling the `reset()` method which returns the first observation and resets the agent to the starting location. ###Code environment.reset() environment.plot_state() ###Output _____no_output_____ ###Markdown Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.Let's take an action and visualise the resulting state of the grid-world. (You'll need to rerun the cell if you pick a new action.) **Note for kaggle users:** As kaggle does not render the forms automatically the students should be careful to notice the various instructions and manually play around with the values for the variables ###Code # @title Pick an action and see the state changing action = "left" #@param ["up", "right", "down", "left"] {type:"string"} action_int = {'up': 0, 'right': 1, 'down': 2, 'left':3 } action = int(action_int[action]) timestep = environment.step(action) # pytype: dm_env.TimeStep environment.plot_state() # @title Run loop { form-width: "30%" } # @markdown This function runs an agent in the environment for a number of # @markdown episodes, allowing it to learn. # @markdown *Double-click* to inspect the `run_loop` function. def run_loop(environment, agent, num_episodes=None, num_steps=None, logger_time_delta=1., label='training_loop', log_loss=False, ): """Perform the run loop. We are following the Acme run loop. Run the environment loop for `num_episodes` episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely. Args: environment: dm_env used to generate trajectories. agent: acme.Actor for selecting actions in the run loop. num_steps: number of steps to run the loop for. If `None` (default), runs without limit. num_episodes: number of episodes to run the loop for. If `None` (default), runs without limit. logger_time_delta: time interval (in seconds) between consecutive logging steps. label: optional label used at logging steps. """ logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta) iterator = range(num_episodes) if num_episodes else itertools.count() all_returns = [] num_total_steps = 0 for episode in iterator: # Reset any counts and start the environment. start_time = time.time() episode_steps = 0 episode_return = 0 episode_loss = 0 timestep = environment.reset() # Make the first observation. agent.observe_first(timestep) # Run an episode. 
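    # One episode: repeatedly ask the agent for an action, step the
    # environment with it, let the agent observe the resulting transition
    # and update itself, until the environment signals a LAST timestep
    # (or the optional num_steps budget is exhausted).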
while not timestep.last(): # Generate an action from the agent's policy and step the environment. action = agent.select_action(timestep.observation) timestep = environment.step(action) # Have the agent observe the timestep and let the agent update itself. agent.observe(action, next_timestep=timestep) agent.update() # Book-keeping. episode_steps += 1 num_total_steps += 1 episode_return += timestep.reward if log_loss: episode_loss += agent.last_loss if num_steps is not None and num_total_steps >= num_steps: break # Collect the results and combine with counts. steps_per_second = episode_steps / (time.time() - start_time) result = { 'episode': episode, 'episode_length': episode_steps, 'episode_return': episode_return, } if log_loss: result['loss_avg'] = episode_loss/episode_steps all_returns.append(episode_return) # Log the given results. logger.write(result) if num_steps is not None and num_total_steps >= num_steps: break return all_returns # @title Implement the evaluation loop { form-width: "30%" } # @markdown This function runs the agent in the environment for a number of # @markdown episodes, without allowing it to learn, in order to evaluate it. # @markdown *Double-click* to inspect the `evaluate` function. def evaluate(environment: dm_env.Environment, agent: acme.Actor, evaluation_episodes: int): frames = [] for episode in range(evaluation_episodes): timestep = environment.reset() episode_return = 0 steps = 0 while not timestep.last(): frames.append(environment.plot_state(return_rgb=True)) action = agent.select_action(timestep.observation) timestep = environment.step(action) steps += 1 episode_return += timestep.reward print( f'Episode {episode} ended with reward {episode_return} in {steps} steps' ) return frames def display_video(frames: Sequence[np.ndarray], filename: str = 'temp.mp4', frame_rate: int = 12): """Save and display video.""" # Write the frames to a video. with imageio.get_writer(filename, fps=frame_rate) as video: for frame in frames: video.append_data(frame) # Read video and display the video. video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) ###Output _____no_output_____ ###Markdown Section 2.2: The AgentWe will be implementing Tabular & Function Approximation agents. Tabular agents are purely in Python.All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs: Agent interfaceEach agent implements the following functions:```pythonclass Agent(acme.Actor): def __init__(self, number_of_actions, number_of_states, ...): """Provides the agent the number of actions and number of states.""" def select_action(self, observation): """Generates actions from observations.""" def observe_first(self, timestep): """Records the initial timestep in a trajectory.""" def observe(self, action, next_timestep): """Records the transition which occurred from taking an action.""" def update(self): """Updates the agent's internals to potentially change its behavior."""```Remarks on the `observe()` function:1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.2. The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.3. 
The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`. Coding Exercise 2.1: Random AgentBelow is a partially complete implemention of an agent that follows a random (non-learning) policy. Fill in the ```select_action``` method.The ```select_action``` method should return a random **integer** between 0 and ```self._num_actions``` (not a tensor or an array!) ###Code class RandomAgent(acme.Actor): def __init__(self, environment_spec): """Gets the number of available actions from the environment spec.""" self._num_actions = environment_spec.actions.num_values def select_action(self, observation): """Selects an action uniformly at random.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the select action method") ################################################# # TODO return a random integer beween 0 and self._num_actions. # HINT: see the reference for how to sample a random integer in numpy: # https://numpy.org/doc/1.16/reference/routines.random.html action = ... return action def observe_first(self, timestep): """Does not record as the RandomAgent has no use for data.""" pass def observe(self, action, next_timestep): """Does not record as the RandomAgent has no use for data.""" pass def update(self): """Does not update as the RandomAgent does not learn from data.""" pass ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7eaa84d6.py) ###Code # @title Visualisation of a random agent in GridWorld { form-width: "30%" } # Create the agent by giving it the action space specification. agent = RandomAgent(environment_spec) # Run the agent in the evaluation loop, which returns the frames. frames = evaluate(environment, agent, evaluation_episodes=1) # Visualize the random agent's episode. display_video(frames) ###Output _____no_output_____ ###Markdown --- Section 3: The Bellman Equation ###Code # @title Video 3: The Bellman Equation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Lv411E7CB", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"cLCoNBmYUns", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown In this tutorial we focus mainly on **value based methods**, where agents maintain a value for all state-action pairs and use those estimates to choose actions that maximize that **value** (instead of maintaining a policy directly, like in **policy gradient methods**). 
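As a concrete picture of what that means in code, here is a minimal sketch with a made-up Q-table (all numbers are purely illustrative):

```python
import numpy as np

# A toy Q-table: 3 states x 4 actions, values made up purely for illustration.
Q = np.array([[ 0.0,  1.0, -0.5,  0.2],
              [ 0.3,  0.0,  2.0,  0.1],
              [-1.0,  0.5,  0.0,  0.9]])

state = 1
greedy_action = np.argmax(Q[state])  # pick the action with the largest estimated value
print(greedy_action)                 # -> 2, i.e. "down" in this GridWorld's action ordering
```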
We represent the **action-value function** (otherwise known as $\color{green}Q$-function associated with following/employing a policy $\pi$ in a given MDP as:\begin{equation}\color{green}Q^{\color{blue}{\pi}}(\color{red}{s},\color{blue}{a}) = \mathbb{E}_{\tau \sim P^{\color{blue}{\pi}}} \left[ \sum_t \gamma^t \color{green}{r_t}| s_0=\color{red}s,a_0=\color{blue}{a} \right]\end{equation}where $\tau = \{\color{red}{s_0}, \color{blue}{a_0}, \color{green}{r_0}, \color{red}{s_1}, \color{blue}{a_1}, \color{green}{r_1}, \cdots \}$Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:\begin{equation}\color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a}) = \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\left( \color{green}{R}(\color{red}{s},\color{blue}{a}, \color{red}{s'}) + \gamma \color{green}V^\color{blue}{\pi}(\color{red}{s'}) \right)\end{equation}where $\color{green}V^\color{blue}{\pi}$ is the expected $\color{green}Q^\color{blue}{\pi}$ value for a particular state, i.e. $\color{green}V^\color{blue}{\pi}(\color{red}{s}) = \sum_{\color{blue}{a} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi}(\color{blue}{a} |\color{red}{s}) \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a})$. --- Section 4: Policy Evaluation ###Code # @title Video 4: Policy Evaluation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV15f4y157zA", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HAxR4SuaZs4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **Episodic vs non-episodic environments:** Up until now, we've mainly been talking about episodic environments, or environments that terminate and reset (resampled) after a finite number of steps. However, there are also *non-episodic* environments, in which an agent cannot count on the environment resetting. 
Thus, they are forced to learn in a *continual* fashion.**Policy iteration vs value iteration:** Compare the two equations below, noting that the only difference is that in value iteration, the second sum is replaced by a max.*Policy iteration (using Bellman expectation equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\sum_{\color{blue}{a'} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi_{k-1}}(\color{blue}{a'} |\color{red}{s'}) \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation}*Value iteration (using Bellman optimality equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\max_{\color{blue}{a'}} \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation} Coding Exercise 4.1 Policy Evaluation Agent Tabular agents implement a function `q_values()` returning a matrix of Q valuesof shape: (`number_of_states`, `number_of_actions`)In this section, we will implement a `PolicyEvalAgent` as an ACME actor: given an `evaluation_policy` $\pi_e$ and a `behaviour_policy` $\pi_b$, it will use the `behaviour_policy` to choose actions, and it will use the corresponding trajectory data to evaluate the `evaluation_policy` (i.e. compute the Q-values as if you were following the `evaluation_policy`). Algorithm:**Initialize** $Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}(\color{red}s)$**Loop forever**:1. $\color{red}{s} \gets{}$current (nonterminal) state 2. $\color{blue}{a} \gets{} \text{behaviour_policy }\pi_b(\color{red}s)$ 3. Take action $\color{blue}{a}$; observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$4. Compute TD-error: $\delta = \color{green}R + \gamma Q(\color{red}{s'}, \underbrace{\pi_e(\color{red}{s'}}_{\color{blue}{a'}})) − Q(\color{red}s, \color{blue}a)$4. Update Q-value with a small $\alpha$ step: $Q(\color{red}s, \color{blue}a) \gets Q(\color{red}s, \color{blue}a) + \alpha \delta$We will use a uniform `random policy` as our `evaluation policy` here, but you could replace this with any policy you want, such as a greedy one. ###Code # Uniform random policy def random_policy(q): return np.random.randint(4) class PolicyEvalAgent(acme.Actor): def __init__(self, environment_spec, evaluated_policy, behaviour_policy=random_policy, step_size=0.1): self._state = None # Get number of states and actions from the environment spec. self._number_of_states = environment_spec.observations.num_values self._number_of_actions = environment_spec.actions.num_values self._step_size = step_size self._behaviour_policy = behaviour_policy self._evaluated_policy = evaluated_policy ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Initialize your Q-values!") ################################################# # TODO Initialize the Q-values to be all zeros. 
# (Note: can also be random, but we use zeros here for reproducibility) # HINT: This is a table of state and action pairs, so needs to be a 2-D # array. See the reference for how to create this in numpy: # https://numpy.org/doc/stable/reference/generated/numpy.zeros.html self._q = ... self._action = None self._next_state = None @property def q_values(self): # return the Q values return self._q def select_action(self, observation): # Select an action return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): self._state = timestep.observation def observe(self, action, next_timestep): s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute TD-Error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Need to select the next action") ################################################# # TODO Select the next action from the evaluation policy # HINT: Refer to step 4 of the algorithm above. next_a = ... self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a] def update(self): # Updates s = self._state a = self._action # Q-value table update. self._q[s, a] += self._step_size * self._td_error # Update the state self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7b3f830c.py) ###Code # @title Perform policy evaluation { form-width: "30%" } # @markdown Here you can visualize the state value and action-value functions for the "simple" task. num_steps = 1e3 # Create the environment grid = build_gridworld_task(task='simple') environment, environment_spec = setup_environment(grid) # Create the policy evaluation agent to evaluate a random policy. agent = PolicyEvalAgent(environment_spec, evaluated_policy=random_policy) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=int(num_steps)) # get the q-values q = agent.q_values.reshape(grid._layout.shape + (4, )) # visualize value functions print('AFTER {} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=1.) 
###Output _____no_output_____ ###Markdown --- Section 5: Tabular Value-Based Model-Free Learning ###Code # @title Video 5: Model-Free Learning from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1iU4y1n7M6", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"Y4TweUYnexU", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **On-policy (SARSA) vs off-policy (Q-learning) TD control:** Compare the two equations below and see that the only difference is that for Q-learning, the update is performed assuming that a greedy policy is followed, which is not the one used to collect the data, hence the name *off-policy*. *SARSA*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation}*Q-learning*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\max_{\color{blue}{a'}} \color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation} Section 5.1: On-policy control: SARSA AgentIn this section, we are focusing on control RL algorithms, which perform the **evaluation** and **improvement** of the policy synchronously. That is, the policy that is being evaluated improves as the agent is using it to interact with the environent.The first algorithm we are going to be looking at is SARSA. This is an **on-policy algorithm** -- i.e: the data collection is done by leveraging the policy we're trying to optimize. As discussed during lectures, a greedy policy with respect to a given $\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\epsilon$-greedy policy with respect to $\color{Green}Q$. SARSA Algorithm**Input:**- $\epsilon \in (0, 1)$ the probability of taking a random action , and- $\alpha > 0$ the step size, also known as learning rate.**Initialize:** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}$**Loop forever:**1. Get $\color{red}s \gets{}$current (non-terminal) state 2. Select $\color{blue}a \gets{} \text{epsilon_greedy}(\color{green}Q(\color{red}s, \cdot))$ 3. Step in the environment by passing the selected action $\color{blue}a$4. Observe resulting reward $\color{green}r$, discount $\gamma$, and state $\color{red}{s'}$5. Compute TD error: $\Delta \color{green}Q \gets \color{green}r + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}s, \color{blue}a)$, where $\color{blue}{a'} \gets \text{epsilon_greedy}(\color{green}Q(\color{red}{s'}, \cdot))$5. 
Update $\color{green}Q(\color{red}s, \color{blue}a) \gets \color{green}Q(\color{red}s, \color{blue}a) + \alpha \Delta \color{green}Q$ Coding Exercise 5.1: Implement $\epsilon$-greedyBelow you will find incomplete code for sampling from an $\epsilon$-greedy policy, to be used later when we implement an agent that learns values according to the SARSA algorithm. ###Code def epsilon_greedy( q_values_at_s: np.ndarray, # Q-values in state s: Q(s, a). epsilon: float = 0.1 # Probability of taking a random action. ): """Return an epsilon-greedy action sample.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete epsilon greedy policy function") ################################################# # TODO generate a uniform random number and compare it to epsilon to decide if # the action should be greedy or not # HINT: Use np.random.random() to generate a random float from 0 to 1. if ...: #TODO Greedy: Pick action with the largest Q-value. action = ... else: # Get the number of actions from the size of the given vector of Q-values. num_actions = np.array(q_values_at_s).shape[-1] # TODO else return a random action # HINT: Use np.random.randint() to generate a random integer. action = ... return action ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_524ce08f.py) ###Code # @title Sample action from $\epsilon$-greedy { form-width: "30%" } # @markdown With $\epsilon=0.5$, you should see that about half the time, you will get back the optimal # @markdown action 3, but half the time, it will be random. # Create fake q-values q_values = np.array([0, 0, 0, 1]) # Set epsilon = 0.5 epsilon = 0.5 action = epsilon_greedy(q_values, epsilon=epsilon) print(action) ###Output _____no_output_____ ###Markdown Coding Exercise 5.2: Run your SARSA agent on the `obstacle` environmentThis environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods. Try varying the number of steps. ###Code class SarsaAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, epsilon: float, step_size: float = 0.1 ): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size self._epsilon = epsilon # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return epsilon_greedy(self._q[observation], self._epsilon) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the action that would be taken from the next state. next_a = self.select_action(next_s) # Compute the on-policy Q-value update. 
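    # Note: next_a above is sampled from the same epsilon-greedy policy that is
    # being learned, which is exactly what makes SARSA an on-policy method.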
self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the on-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: see step 5 in the pseudocode above. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[s, a] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_4f341a18.py) ###Code # @title Run SARSA agent and visualize value function num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # Create the environment. grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # Create the agent. agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1) # Run the experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) print('AFTER {0:,} STEPS ...'.format(num_steps)) # Get the Q-values and reshape them to recover grid-like structure of states. q_values = agent.q_values grid_shape = grid.layout.shape q_values = q_values.reshape([*grid_shape, -1]) # Visualize the value and Q-value tables. plot_action_values(q_values, epsilon=1.) # Visualize the greedy policy. environment.plot_greedy_policy(q_values) ###Output _____no_output_____ ###Markdown Section 5.2 Off-policy control: Q-learning AgentReminder: $\color{green}Q$-learning is a very powerful and general algorithm, that enables control (figuring out the optimal policy/value function) both on and off-policy.**Initialize** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s} \in \color{red}{\mathcal{S}}$ and $\color{blue}{a} \in \color{blue}{\mathcal{A}}$**Loop forever**:1. Get $\color{red}{s} \gets{}$current (non-terminal) state 2. Select $\color{blue}{a} \gets{} \text{behaviour_policy}(\color{red}{s})$ 3. Step in the environment by passing the selected action $\color{blue}{a}$4. Observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$5. Compute the TD error: $\Delta \color{green}Q \gets \color{green}{r} + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}{s}, \color{blue}{a})$, where $\color{blue}{a'} \gets \arg\max_{\color{blue}{\mathcal A}} \color{green}Q(\color{red}{s'}, \cdot)$6. Update $\color{green}Q(\color{red}{s}, \color{blue}{a}) \gets \color{green}Q(\color{red}{s}, \color{blue}{a}) + \alpha \Delta \color{green}Q$Notice that the actions $\color{blue}{a}$ and $\color{blue}{a'}$ are not selected using the same policy, hence this algorithm being **off-policy**. 
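Before implementing the agent, the difference between the two bootstrap targets can be seen on a tiny standalone example. All numbers below are made up purely for illustration (they do not come from the gridworld), and the snippet is independent of the agent classes in this notebook.

###Code
# Standalone sketch: SARSA vs Q-learning TD errors for one fictitious transition.
import numpy as np

q_next_s = np.array([1.0, 3.0, 2.0, 0.0])  # made-up Q(s', .) for the four actions
r, g = 1.0, 0.9                            # made-up reward and discount
q_s_a = 0.5                                # made-up current estimate Q(s, a)

# SARSA bootstraps on the action its epsilon-greedy behaviour policy actually
# sampled; suppose it happened to pick action 2, which is not the greedy one.
sarsa_td_error = r + g * q_next_s[2] - q_s_a

# Q-learning bootstraps on the greedy action, regardless of what gets executed.
q_learning_td_error = r + g * np.max(q_next_s) - q_s_a

print('SARSA TD error:     ', sarsa_td_error)       # 1 + 0.9 * 2.0 - 0.5 = 2.3
print('Q-learning TD error:', q_learning_td_error)  # 1 + 0.9 * 3.0 - 0.5 = 3.2
###Output
_____no_output_____
###Markdown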
Coding Exercise 5.3: Implement Q-Learning ###Code QValues = np.ndarray Action = int # A value-based policy takes the Q-values at a state and returns an action. ValueBasedPolicy = Callable[[QValues], Action] class QLearningAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, behaviour_policy: ValueBasedPolicy, step_size: float = 0.1): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size # Store behavior policy. self._behaviour_policy = behaviour_policy # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the TD error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the off-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: This is very similar to what we did for SARSA, except keep in mind # that we're now taking a max over the q-values (see lecture footnotes above). # You will find the function np.max() useful. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[...] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_0f0ff9d8.py) Run your Q-learning agent on the `obstacle` environment ###Code # @title Run your Q-learning epsilon = 1. 
# @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=0) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown Experiment with different levels of greediness* The default was $\epsilon=1.$, what does this correspond to?* Try also $\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way? ###Code # @title Run the cell epsilon = 0.1 # @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=epsilon) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown --- Section 6: Function Approximation ###Code # @title Video 6: Function approximation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1sg411M7cn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"7_MYePyYhrM", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown So far we only considered look-up tables for value-functions. In all previous cases every state and action pair $(\color{red}{s}, \color{blue}{a})$, had an entry in our $\color{green}Q$-table. Again, this is possible in this environment as the number of states is equal to the number of cells in the grid. 
But this is not scalable to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table could be in this situation?).An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.But what we **really** want is just to be able to *compute* the Q-value, when fed with a particular $(\color{red}{s}, \color{blue}{a})$ pair. So if we had a way to get a function to do this work instead of keeping a big table, we'd get around this problem.To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** them to output the values they should. In this section, we will explore $\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf). Section 6.1 Replay BuffersAn important property of off-policy methods like $\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.In order to optimize the $\color{green}Q$-function we can then sample data from the replay **dataset** and use that data to perform an update. An illustration of this learning loop is shown below. In the next cell we will show how to implement a simple replay buffer. This can be as simple as a python list containing transition data. In more complicated scenarios we might want to have a more performance-tuned variant, we might have to be more concerned about how large replay is and what to do when its full, and we might want to sample from replay in different ways. But a simple python list can go a surprisingly long way. ###Code # Simple replay buffer # Create a convenient container for the SARS tuples required by deep RL agents. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class ReplayBuffer(object): """A simple Python replay buffer.""" def __init__(self, capacity: int = None): self.buffer = collections.deque(maxlen=capacity) self._prev_state = None def add_first(self, initial_timestep: dm_env.TimeStep): self._prev_state = initial_timestep.observation def add(self, action: int, timestep: dm_env.TimeStep): transition = Transitions( state=self._prev_state, action=action, reward=timestep.reward, discount=timestep.discount, next_state=timestep.observation, ) self.buffer.append(transition) self._prev_state = timestep.observation def sample(self, batch_size: int) -> Transitions: # Sample a random batch of Transitions as a list. batch_as_list = random.sample(self.buffer, batch_size) # Convert the list of `batch_size` Transitions into a single Transitions # object where each field has `batch_size` stacked fields. 
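    # (tree_utils.stack_sequence_fields stacks every field along a new leading
    # axis, so e.g. the returned `state` entry has shape [batch_size, ...].)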
return tree_utils.stack_sequence_fields(batch_as_list) def flush(self) -> Transitions: entire_buffer = tree_utils.stack_sequence_fields(self.buffer) self.buffer.clear() return entire_buffer def is_ready(self, batch_size: int) -> bool: return batch_size <= len(self.buffer) ###Output _____no_output_____ ###Markdown Section 6.2: NFQ Agent[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$In other words, the value $\color{green}Q(\color{red}{s}, \color{blue}{a})$ are approximated by the output of a neural network $\color{green}{Q_w}(\color{red}{s}, \color{blue}{a})$ for each possible action $\color{blue}{a} \in \color{blue}{\mathcal{A}}$.$^2$When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.By training our neural network to output values such that the *TD error is minimized*, we will also satisfy the Bellman Optimality Equation, which is a good sufficient condition to enforce, to obtain an optimal policy.Thanks to automatic differentiation, we can just write the TD error as a loss, e.g., with an $\ell^2$ loss, but others would work too:\begin{equation}L(\color{green}w) = \mathbb{E}\left[ \left( \color{green}{r} + \gamma \max_\color{blue}{a'} \color{green}{Q_w}(\color{red}{s'}, \color{blue}{a'}) − \color{green}{Q_w}(\color{red}{s}, \color{blue}{a}) \right)^2\right].\end{equation}Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.NFQ builds on $\color{green}Q$-learning, but if one were to update the Q-values online directly, the training can be unstable and very slow.Instead, NFQ uses a replay buffer, similar to what we see implemented above (Section 6.1), to update the Q-value in a batched setting.When it was introduced, it also was entirely off-policy using a uniformly random policy to collect data, which was prone to instability when applied to more complex environments (e.g. when the input are pixels or the tasks are longer and more complicated).But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.Below you will find an incomplete NFQ agent that takes in observations from a gridworld. Instead of receiving a tabular state, it receives an observation in the form of its (x,y) coordinates in the gridworld, and the (x,y) coordinates of the goal.The goal of this coding exercise is to complete this agent by implementing the loss, using mean squared error.---$^1$ If you read the NFQ paper, they use a "control" notation, where there is a "cost to minimize", instead of "rewards to maximize", so don't be surprised if signs/max/min do not correspond.$^2$ We could feed it $\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $argmax$ over them, it's easiest to just output them all in one pass. Coding Exercise 6.1: Implement NFQ ###Code # Create a convenient container for the SARS tuples required by NFQ. 
Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class NeuralFittedQAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, q_network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 3e-4): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): q_next_s = self._q_network(next_s) # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the NFQ Agent") ################################################# # TODO Average the squared TD errors over the entire batch using # self._loss_fn, which is defined above as nn.MSELoss() # HINT: Take a look at the reference for nn.MSELoss here: # https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html # What should you put for the input and the target? loss = ... # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() # Store the loss for logging purposes (see run_loop implementation above). 
self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_f42d1415.py) Train and Evaluate the NFQ Agent ###Code # @title Training the NFQ Agent epsilon = 0.4 # @param {type:"number"} max_episode_length = 200 # Create the environment. grid = build_gridworld_task( task='simple', observation_type=ObservationType.AGENT_GOAL_POS, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Define the neural function approximator (aka Q network). q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values)) # Build the trainable Q-learning agent agent = NeuralFittedQAgent( environment_spec, q_network, epsilon=epsilon, replay_capacity=100_000, batch_size=10, learning_rate=1e-3) returns = run_loop( environment=environment, agent=agent, num_episodes=500, logger_time_delta=1., log_loss=True) # @title Evaluating the agent (set $\epsilon=0$) # Temporarily change epsilon to be more greedy; remember to change it back. agent._epsilon = 0.0 # Record a few episodes. frames = evaluate(environment, agent, evaluation_episodes=5) # Change epsilon back. agent._epsilon = epsilon # Display the video of the episodes. display_video(frames, frame_rate=6) # @title Visualise the learned $Q$ values # Evaluate the policy for every state, similar to tabular agents above. environment.reset() pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4, )) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) ###Output _____no_output_____ ###Markdown Compare the Q-values approximated with the neural network with the tabular case in **Section 5.3**. Notice how the neural network is generalizing from the visited states to the unvisited similar states, while in the tabular case we updated the value of each state only when we visited that state. Compare the greedy and behaviour ($\epsilon$-greedy) policies ###Code # @title Compare the greedy policy with the agent's policy # @markdown Notice that the agent's behavior policy has a lot more randomness, # @markdown due to the high $\epsilon$. However, the greedy policy that's learned # @markdown is optimal. 
environment.plot_greedy_policy(q) plt.figtext(-.08, .95, 'Greedy policy using the learnt Q-values') plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's behavior policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown --- Section 7: DQN ###Code #@title Video 7: Deep Q-Networks (DQN) from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Mo4y1Q7yD", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HEDoNtV1y-w", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown --> In this section, we will look at an advanced deep RL Agent based on the following publication, [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.Here the agent will act directly on a pixel representation of the gridworld. You can find an incomplete implementation below. Coding Exercise 7.1: Run a DQN Agent ###Code class DQN(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 5e-4, target_update_frequency: int = 10): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # create a second q net with the same structure and initial values, which # we'll be updating separately from the learned q-network. self._target_network = copy.deepcopy(self._q_network) # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Keep an internal tracker of steps self._current_step = 0 # How often to update the target network self._target_update_frequency = target_update_frequency # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(), lr=learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. # Sonnet requires a batch dimension, which we squeeze out right after. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. 
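    # With probability (1 - epsilon) act greedily with respect to the Q-values;
    # otherwise fall through to a uniformly random (exploratory) action.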
if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): self._current_step += 1 if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Optionally unpack the transitions to lighten notation. # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the DQN Agent") ################################################# #TODO get the value of the next states evaluated by the target network #HINT: use self._target_network, defined above. q_next_s = ... # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) # Average the squared TD errors over the entire batch loss = self._loss_fn(target_q_value, q_s_a) # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() if self._current_step % self._target_update_frequency == 0: self._target_network.load_state_dict(self._q_network.state_dict()) # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_d6d1b1d0.py) ###Code # @title Train and evaluate the DQN agent epsilon = 0.25 # @param {type: "number"} num_episodes = 1000 # @param {type: "integer"} grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID, max_episode_length=200) environment, environment_spec = setup_environment(grid) # Build the agent's network. 
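# The GRID observation is channels-last, i.e. [batch, height, width, channels],
# while PyTorch's Conv2d expects channels-first; the small Permute module below
# reorders the axes before the convolutional layers.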
class Permute(nn.Module): def __init__(self, order: list): super(Permute,self).__init__() self.order = order def forward(self, x): return x.permute(self.order) q_network = nn.Sequential(Permute([0, 3, 1, 2]), nn.Conv2d(3, 32, kernel_size=4, stride=2,padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(3, 1), nn.Flatten(), nn.Linear(384, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values) ) agent = DQN( environment_spec=environment_spec, network=q_network, batch_size=10, epsilon=epsilon, target_update_frequency=25) returns = run_loop( environment=environment, agent=agent, num_episodes=num_episodes, num_steps=100000) # @title Visualise the learned $Q$ values # Evaluate the policy for every state, similar to tabular agents above. pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) # @title Compare the greedy policy with the agent's policy environment.plot_greedy_policy(q) plt.figtext(-.08, .95, "Greedy policy using the learnt Q-values") plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's epsilon-greedy policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown --- Section 8: Beyond Value Based Model-Free Methods ###Code # @title Video 8: Other RL Methods from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV14w411977Y", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"1N4Jm9loJx4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Cartpole taskHere we switch to training on a different kind of task, which has a continuous action space: Cartpole in [Gym](https://gym.openai.com/). As you recall from the video, policy-based methods are particularly well-suited for these kinds of tasks. We will be exploring two of those methods below. ###Code # @title Make a CartPole environment, `gym.make('CartPole-v1')` env = gym.make('CartPole-v1') # Set seeds env.seed(SEED) set_seed(SEED) ###Output _____no_output_____ ###Markdown Section 8.1: Policy gradientNow we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. $\color{blue}\pi(\color{red}s) = \arg\max_{\color{blue}a}\color{green}Q(\color{red}s, \color{blue}a)$, we will directly parameterize the policy and write it as the distribution\begin{equation}\color{blue}a_t \sim \color{blue}\pi_{\theta}(\color{blue}a_t|\color{red}s_t).\end{equation}Here $\theta$ represent the parameters of the policy. 
We will update the policy parameters using gradient ascent to **maximize** expected future reward.One convenient way to represent the conditional distribution above is as a function that takes a state $\color{red}s$ and returns a distribution over actions $\color{blue}a$.Defined below is an agent which implements the REINFORCE algorithm. REINFORCE (Williams 1992) is the simplest model-free general reinforcement learning technique.The **basic idea** is to use probabilistic action choice. If the reward at the end turns out to be high, we make **all** actions in this sequence **more likely** (otherwise, we do the opposite).This strategy could reinforce "bad" actions as well, however they will turn out to be part of trajectories with low reward and will likely not get accentuated.From the lectures, we know that we need to compute\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}where $\color{green} G_t$ is the sum over future rewards from time $t$, defined as\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. It will then update its policy given the above gradient (and the Adam optimizer).A policy gradient trains an agent without explicitly mapping the value for every state-action pair in an environment by taking small steps and updating the policy based on the reward associated with that step. In this section, we will build a small network that trains using policy gradient using PyTorch.The agent can receive a reward immediately for an action or it can receive the award at a later time such as the end of the episode. The policy function our agent will try to learn is $\pi_\theta(a,s)$, where $\theta$ is the parameter vector, $s$ is a particular state, and $a$ is an action.Monte-Carlo Policy Gradient approach will be used, which means the agent will run through an entire episode and then update policy based on the rewards obtained. ###Code # @title Set the hyperparameters for Policy Gradient num_steps = 300 learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # @param {type:"number"} # @markdown Only used in Policy Gradient Method: hidden_neurons = 128 # @param {type:"integer"} ###Output _____no_output_____ ###Markdown Coding Exercise 8.1: Creating a simple neural networkBelow you will find some incomplete code. Fill in the missing code to construct the specified neural network.Let us define a simple feed forward neural network with one hidden layer of 128 neurons and a dropout of 0.6. Let's use Adam as our optimizer and a learning rate of 0.01. Use the hyperparameters already defined rather than using explicit values.Using dropout will significantly improve the performance of the policy. Do compare your results with and without dropout and experiment with other hyper-parameter values as well. 
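Before filling in the network, it can help to see how the policy-gradient expression above is typically turned into something an optimizer can work with. The snippet below is a standalone sketch with made-up log-probabilities and returns (it is not the exercise solution and is independent of the ```update_policy``` function defined later): maximizing $\mathbb{E}[G_t \log \pi_\theta(a_t|s_t)]$ is implemented by minimizing its negative.

###Code
# Standalone sketch (made-up numbers): turning the REINFORCE gradient into a loss.
import torch

log_probs = torch.log(torch.tensor([0.7, 0.4, 0.9]))  # pretend log pi(a_t | s_t)
returns = torch.tensor([4.90, 3.94, 2.97])            # pretend discounted returns G_t

# Minimizing this surrogate loss pushes up the probability of actions that were
# followed by large returns, matching the gradient expression above.
pseudo_loss = -(log_probs * returns).sum()
print(pseudo_loss)
###Output
_____no_output_____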
###Code class PolicyGradientNet(nn.Module): def __init__(self): super(PolicyGradientNet, self).__init__() self.state_space = env.observation_space.shape[0] self.action_space = env.action_space.n ################################################# ## TODO for students: Define two linear layers ## from the first expression raise NotImplementedError("Student exercise: Create FF neural network.") ################################################# # HINT: you can construct linear layers using nn.Linear(); what are the # sizes of the inputs and outputs of each of the layers? Also remember # that you need to use hidden_neurons (see hyperparameters section above). # https://pytorch.org/docs/stable/generated/torch.nn.Linear.html self.l1 = ... self.l2 = ... self.gamma = ... # Episode policy and past rewards self.past_policy = Variable(torch.Tensor()) self.reward_episode = [] # Overall reward and past loss self.past_reward = [] self.past_loss = [] def forward(self, x): model = torch.nn.Sequential( self.l1, nn.Dropout(p=dropout), nn.ReLU(), self.l2, nn.Softmax(dim=-1) ) return model(x) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_9aaf4a83.py) Now let's create an instance of the network we have defined and use Adam as the optimizer using the learning_rate as hyperparameter already defined above. ###Code policy = PolicyGradientNet() pg_optimizer = optim.Adam(policy.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Select ActionThe `select_action()` function chooses an action based on our policy probability distribution using the PyTorch distributions package. Our policy returns a probability for each possible action in our action space (move left or move right) as an array of length two such as [0.7, 0.3]. We then choose an action based on these probabilities, record our history, and return our action. ###Code def select_action(state): #Select an action (0 or 1) by running policy model and choosing based on the probabilities in state state = torch.from_numpy(state).type(torch.FloatTensor) state = policy(Variable(state)) c = Categorical(state) action = c.sample() # Add log probability of chosen action if policy.past_policy.dim() != 0: policy.past_policy = torch.cat([policy.past_policy, c.log_prob(action).reshape(1)]) else: policy.past_policy = (c.log_prob(action).reshape(1)) return action ###Output _____no_output_____ ###Markdown Update policyThis function updates the policy. Reward $G_t$We update our policy by taking a sample of the action value function $Q^{\pi_\theta} (s_t,a_t)$ by playing through episodes of the game. $Q^{\pi_\theta} (s_t,a_t)$ is defined as the expected return by taking action $a$ in state $s$ following policy $\pi$.We know that for every step the simulation continues we receive a reward of 1. We can use this to calculate the policy gradient at each time step, where $r$ is the reward for a particular state-action pair. Rather than using the instantaneous reward, $r$, we instead use a long term reward $ v_{t} $ where $v_t$ is the discounted sum of all future rewards for the length of the episode. $v_{t}$ is then,\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}where $\gamma$ is the discount factor (0.99). 
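The backward recursion for these discounted returns is only a couple of lines. Below is a standalone sketch (not part of the agent) using five unit rewards and $\gamma = 0.99$, i.e. the same setting as the example that follows.

###Code
# Standalone sketch: discounted returns G_t for a 5-step episode with reward 1
# per step and gamma = 0.99 (values chosen to match the example in the text).
gamma_example = 0.99
example_rewards = [1, 1, 1, 1, 1]

example_returns, R = [], 0
for r in reversed(example_rewards):
    R = r + gamma_example * R
    example_returns.insert(0, R)

print([round(G, 2) for G in example_returns])  # -> [4.9, 3.94, 2.97, 1.99, 1.0]
###Output
_____no_output_____
###Markdown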
For example, if an episode lasts 5 steps, the reward for each step will be [4.90, 3.94, 2.97, 1.99, 1].

Next we scale our reward vector by subtracting the mean from each element and scaling to unit variance by dividing by the standard deviation. This practice is common for machine learning applications and is the same operation as Scikit Learn's __[StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)__. It also has the effect of compensating for future uncertainty.

Update Policy: equation

After each episode we apply Monte-Carlo Policy Gradient to improve our policy according to the equation:

\begin{equation}
\Delta\theta_t = \alpha\nabla_\theta \, \log \pi_\theta (s_t,a_t)G_t
\end{equation}

We will then feed our policy history multiplied by our rewards to our optimizer and update the weights of our neural network using stochastic gradient **ascent**. This should increase the likelihood of actions that got our agent a larger reward.

The following function ```update_policy``` updates the network weights and therefore the policy.

###Code
def update_policy():
    R = 0
    rewards = []

    # Discount future rewards back to the present using gamma
    for r in policy.reward_episode[::-1]:
        R = r + policy.gamma * R
        rewards.insert(0, R)

    # Scale rewards
    rewards = torch.FloatTensor(rewards)
    rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps)

    # Calculate loss
    pg_loss = (torch.sum(torch.mul(policy.past_policy, Variable(rewards)).mul(-1), -1))

    # Update network weights
    # Use zero_grad(), backward() and step() methods of the optimizer instance.
    pg_optimizer.zero_grad()
    pg_loss.backward()

    # Update the weights
    for param in policy.parameters():
        param.grad.data.clamp_(-1, 1)

    pg_optimizer.step()

    # Save and initialize episode past counters
    policy.past_loss.append(pg_loss.item())
    policy.past_reward.append(np.sum(policy.reward_episode))
    policy.past_policy = Variable(torch.Tensor())
    policy.reward_episode = []
###Output
_____no_output_____
###Markdown
Training

This is our main policy training loop. For each step in a training episode, we choose an action, take a step through the environment, and record the resulting new state and reward. We call update_policy() at the end of each episode to feed the episode history to our neural network and improve our policy.

###Code
def policy_gradient_train(episodes):
    running_reward = 10
    for episode in range(episodes):
        state = env.reset()
        done = False

        for time in range(1000):
            action = select_action(state)
            # Step through environment using chosen action
            state, reward, done, _ = env.step(action.item())
            # Save reward
            policy.reward_episode.append(reward)
            if done:
                break

        # Used to determine when the environment is solved.
        running_reward = (running_reward * gamma) + (time * (1 - gamma))

        update_policy()

        if episode % 50 == 0:
            print(f"Episode {episode}\tLast length: {time:5.0f}"
                  f"\tAverage length: {running_reward:.2f}")

        if running_reward > env.spec.reward_threshold:
            print(f"Solved! 
Running reward is now {running_reward} " f"and the last episode runs to {time} time steps!") break ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 #@param {type:"integer"} policy_gradient_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for policy gradient def plot_policy_gradient_training(): window = int(episodes / 20) fig, ((ax1), (ax2)) = plt.subplots(1, 2, sharey=True, figsize=[15, 4]); rolling_mean = pd.Series(policy.past_reward).rolling(window).mean() std = pd.Series(policy.past_reward).rolling(window).std() ax1.plot(rolling_mean) ax1.fill_between(range(len(policy.past_reward)), rolling_mean-std, rolling_mean+std, color='orange', alpha=0.2) ax1.set_title(f"Episode Length Moving Average ({window}-episode window)") ax1.set_xlabel('Episode'); ax1.set_ylabel('Episode Length') ax2.plot(policy.past_reward) ax2.set_title('Episode Length') ax2.set_xlabel('Episode') ax2.set_ylabel('Episode Length') fig.tight_layout(pad=2) plt.show() plot_policy_gradient_training() ###Output _____no_output_____ ###Markdown Exercise 8.1: Explore different hyperparameters.Try running the model again, by modifying the hyperparameters and observe the outputs. Be sure to rerun the function definition cells in order to pick up on the updated values.What do you see when you 1. increase learning rate2. decrease learning rate3. decrease gamma ($\gamma$)4. increase number of hidden neurons in the network Section 8.2: Actor-criticRecall the policy gradient\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}The policy parameters are updated using Monte Carlo technique and uses random samples. This introduces high variability in log probabilities and cumulative reward values. This leads to noisy gradients and can cause unstable learning.One way to reduce variance and increase stability is subtracting the cumulative reward by a baseline:\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} (G_t - b) \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}Intuitively, reducing cumulative reward will make smaller gradients and thus smaller and more stable (hopefully) updates.From the lecture slides, we know that in Actor Critic Method:1. The “Critic” estimates the value function. This could be the action-value (the Q value) or state-value (the V value).2. The “Actor” updates the policy distribution in the direction suggested by the Critic (such as with policy gradients).Both the Critic and Actor functions are parameterized with neural networks. The "Critic" network parameterizes the Q-value. 
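Before wiring up the networks, here is a standalone sketch (with made-up numbers) of how the critic and actor objectives fit together; it mirrors the loss computation used in the training loop further below. The advantage is the baseline-subtracted signal discussed above.

###Code
# Standalone sketch (made-up numbers): actor and critic losses with a baseline.
import torch

qvals = torch.tensor([5.0, 4.0, 3.0])                 # pretend bootstrapped returns
values = torch.tensor([4.5, 4.2, 2.5])                # pretend critic estimates V(s_t)
log_probs = torch.log(torch.tensor([0.6, 0.3, 0.8]))  # pretend log pi(a_t | s_t)

advantage = qvals - values                    # baseline-subtracted learning signal
actor_loss = (-log_probs * advantage).mean()  # policy gradient weighted by advantage
critic_loss = 0.5 * advantage.pow(2).mean()   # regress V(s_t) towards the returns

print(actor_loss.item(), critic_loss.item())
###Output
_____no_output_____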
###Code # @title Set the hyperparameters for Actor Critic learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # Only used in Actor-Critic Method hidden_size = 256 # @param {type:"integer"} num_steps = 300 ###Output _____no_output_____ ###Markdown Actor Critic Network ###Code class ActorCriticNet(nn.Module): def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4): super(ActorCriticNet, self).__init__() self.num_actions = num_actions self.critic_linear1 = nn.Linear(num_inputs, hidden_size) self.critic_linear2 = nn.Linear(hidden_size, 1) self.actor_linear1 = nn.Linear(num_inputs, hidden_size) self.actor_linear2 = nn.Linear(hidden_size, num_actions) self.all_rewards = [] self.all_lengths = [] self.average_lengths = [] def forward(self, state): state = Variable(torch.from_numpy(state).float().unsqueeze(0)) value = F.relu(self.critic_linear1(state)) value = self.critic_linear2(value) policy_dist = F.relu(self.actor_linear1(state)) policy_dist = F.softmax(self.actor_linear2(policy_dist), dim=1) return value, policy_dist ###Output _____no_output_____ ###Markdown Training ###Code def actor_critic_train(episodes): all_lengths = [] average_lengths = [] all_rewards = [] entropy_term = 0 for episode in range(episodes): log_probs = [] values = [] rewards = [] state = env.reset() for steps in range(num_steps): value, policy_dist = actor_critic.forward(state) value = value.detach().numpy()[0, 0] dist = policy_dist.detach().numpy() action = np.random.choice(num_outputs, p=np.squeeze(dist)) log_prob = torch.log(policy_dist.squeeze(0)[action]) entropy = -np.sum(np.mean(dist) * np.log(dist)) new_state, reward, done, _ = env.step(action) rewards.append(reward) values.append(value) log_probs.append(log_prob) entropy_term += entropy state = new_state if done or steps == num_steps - 1: qval, _ = actor_critic.forward(new_state) qval = qval.detach().numpy()[0, 0] all_rewards.append(np.sum(rewards)) all_lengths.append(steps) average_lengths.append(np.mean(all_lengths[-10:])) if episode % 50 == 0: print(f"episode: {episode},\treward: {np.sum(rewards)}," f"\ttotal length: {steps}," f"\taverage length: {average_lengths[-1]}") break # compute Q values qvals = np.zeros_like(values) for t in reversed(range(len(rewards))): qval = rewards[t] + gamma * qval qvals[t] = qval #update actor critic values = torch.FloatTensor(values) qvals = torch.FloatTensor(qvals) log_probs = torch.stack(log_probs) advantage = qvals - values actor_loss = (-log_probs * advantage).mean() critic_loss = 0.5 * advantage.pow(2).mean() ac_loss = actor_loss + critic_loss + 0.001 * entropy_term ac_optimizer.zero_grad() ac_loss.backward() ac_optimizer.step() # Store results actor_critic.average_lengths = average_lengths actor_critic.all_rewards = all_rewards actor_critic.all_lengths = all_lengths ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 # @param {type:"integer"} env.reset() num_inputs = env.observation_space.shape[0] num_outputs = env.action_space.n actor_critic = ActorCriticNet(num_inputs, num_outputs, hidden_size) ac_optimizer = optim.Adam(actor_critic.parameters()) actor_critic_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for Actor Critic def plot_actor_critic_training(actor_critic, episodes): window = int(episodes / 20) plt.figure(figsize=(15, 4)) plt.subplot(1, 2, 1) smoothed_rewards = pd.Series(actor_critic.all_rewards).rolling(window).mean() std = 
pd.Series(actor_critic.all_rewards).rolling(window).std() plt.plot(smoothed_rewards, label='Smoothed rewards') plt.fill_between(range(len(smoothed_rewards)), smoothed_rewards - std, smoothed_rewards + std, color='orange', alpha=0.2) plt.xlabel('Episode') plt.ylabel('Reward') plt.subplot(1, 2, 2) plt.plot(actor_critic.all_lengths, label='All lengths') plt.plot(actor_critic.average_lengths, label='Average lengths') plt.xlabel('Episode') plt.ylabel('Episode length') plt.legend() plt.tight_layout() plt.show() plot_actor_critic_training(actor_critic, episodes) ###Output _____no_output_____ ###Markdown Exercise 8.3: Effect of episodes on performanceChange the episodes from 500 to 3000 and observe the performance impact. Exercise 8.4: Effect of learning rate on performanceModify the hyperparameters related to learning_rate and gamma and observe the impact on the performance.Be sure to rerun the function definition cells in order to pick up on the updated values. --- Section 9: RL in the real world ###Code # @title Video 9: Real-world applications and ethics from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Nq4y1X7AF", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"5kBtiW88QVw", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Exercise 9: Group discussionForm a group of 2-3 and have discussions (roughly 3 minutes each) of the following questions:1. **Safety**: what are some safety issues that arise in RL that don’t arise with e.g. supervised learning?2. **Generalization**: What happens if your RL agent is presented with data it hasn’t trained on? (“goes out of distribution”)3. How important do you think **interpretability** is in the ethical and safe deployment of RL agents in the real world? 
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_99944c89.py) --- Section 10: How to learn more ###Code # @title Video 10: How to learn more from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1WM4y1T7G5", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"dKaOpgor5Ek", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Neuromatch Academy: Week 3, Day 2, Tutorial 1 Introduction to Reinforcement Learning__Content creators:__ Feryal Behbahani, Jane Wang, Matthew Sargent, Anoop Kulkarni, Sowmya Parthiban__Content reviewers:__ Lily Cheng, Roberto Guidotti, Arush Tagade__Content editors:__ Spiros Chavlis __Production editors:__ Spiros Chavlis ---Tutorial ObjectivesBy the end of the tutorial, you should be able to:1. Within the RL framework, be able to identify the different components: environment, agent, states, and actions. 2. Understand the Bellman equation and components involved. 3. Implement tabular value-based model-free learning (Q-learning and SARSA).4. Run a DQN agent and experiment with different hyperparameters.5. Have a high-level understanding of other (nonvalue-based) RL methods.6. Discuss real-world applications and ethical issues of RL. 
###Code #@markdown Tutorial slides from IPython.display import HTML HTML('<iframe src="https://docs.google.com/presentation/d/1SspkoRiILE1xGUE0_iRboo-ALqXJVEZCt8IlgWOKgGo/edit?resourcekey=0-gFuj1C_wUqxJ2qPHPTceAQ#slide=id.gdb4fce9ed9_0_289" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>') ###Output _____no_output_____ ###Markdown --- Setup ###Code # Install requirements !pip install einops --quiet !pip install dm-acme --quiet !pip install dm-acme[reverb] --quiet !pip install dm-acme[tf] --quiet !pip install dm-acme[envs] --quiet !pip install dm-env --quiet !sudo apt-get install -y xvfb ffmpeg --quiet !pip install imageio --quiet from IPython.display import clear_output clear_output() # Import modules import gym import enum import copy import time import acme import torch import base64 import dm_env import random import IPython import imageio import warnings import itertools import collections import numpy as np import pandas as pd import sonnet as snt import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf from acme import environment_loop from acme import specs from acme import wrappers from acme.utils import tree_utils from acme.utils import loggers from tqdm import tqdm, trange from torch.autograd import Variable from torch.distributions import Categorical from typing import Callable, Optional, Sequence tf.enable_v2_behavior() warnings.filterwarnings('ignore') np.set_printoptions(precision=3, suppress=1) SEED = 2021 %matplotlib inline #@title Figure settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") import warnings warnings.filterwarnings( action="ignore", message="This figure includes Axes", category=UserWarning ) warnings.filterwarnings( action="ignore", message="Calculating RSM", category=UserWarning ) #@title Helper Functions #@markdown Implement helpers for value visualisation { form-width: "30%" } map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a] map_from_action_to_name = lambda a: ("up", "right", "down", "left")[a] def plot_values(values, colormap='pink', vmin=-1, vmax=10): plt.imshow(values, interpolation="nearest", cmap=colormap, vmin=vmin, vmax=vmax) plt.yticks([]) plt.xticks([]) plt.colorbar(ticks=[vmin, vmax]) def plot_state_value(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(4, 4)) vmin = np.min(action_values) vmax = np.max(action_values) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_action_values(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(8, 8)) fig.subplots_adjust(wspace=0.3, hspace=0.3) vmin = np.min(action_values) vmax = np.max(action_values) dif = vmax - vmin for a in [0, 1, 2, 3]: plt.subplot(3, 3, map_from_action_to_subplot(a)) plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif) action_name = map_from_action_to_name(a) plt.title(r"$q(s, \mathrm{" + action_name + r"})$") plt.subplot(3, 3, 5) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def smooth(x, window=10): return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1) def 
plot_stats(stats, window=10): plt.figure(figsize=(16,4)) plt.subplot(121) xline = range(0, len(stats.episode_lengths), window) plt.plot(xline, smooth(stats.episode_lengths, window=window)) plt.ylabel('Episode Length') plt.xlabel('Episode Count') plt.subplot(122) plt.plot(xline, smooth(stats.episode_rewards, window=window)) plt.ylabel('Episode Return') plt.xlabel('Episode Count') #@title Set random seed. #@markdown Executing `set_seed(seed=seed)` you are setting the seed # for DL its critical to set the random seed so that students can have a # baseline to compare their results to expected results. # Read more here: https://pytorch.org/docs/stable/notes/randomness.html # Call `set_seed` function in the exercises to ensure reproducibility. import random def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') #@title Set device (GPU or CPU). Execute `set_device()` def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device DEVICE = set_device() print(f"`DEVICE` selected: {DEVICE}") ###Output _____no_output_____ ###Markdown ---Section 1: Introduction to Reinforcement Learning ###Code #@title Video 1: Introduction to RL # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="BWz3scQN50M", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown ---Section 2: General Formulation of RL Problems and Gridworlds ###Code #@title Video 2: General Formulation and MDPs # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="h6TxAALY5Fc", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown The agent interacts with the environment in a loop corresponding to the following diagram. The environment defines a set of **actions** that an agent can take. The agent takes an action informed by the **observations** it receives, and will get a **reward** from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulation of rewards obtained from the environment. Section 2.1: The Environment For this practical session we will focus on a **simple grid world** environment,which consists of a 9 x 10 grid of either wall or empty cells, depicted in black and white, respectively. 
The smiling agent starts from an initial location and needs to navigate to reach the goal square.Below you will find an implementation of this Gridworld as a ```dm_env.Environment``` ###Code #@title Implement GridWorld { form-width: "30%" } #@markdown double-click to inspect its contents class ObservationType(enum.IntEnum): STATE_INDEX = enum.auto() AGENT_ONEHOT = enum.auto() GRID = enum.auto() AGENT_GOAL_POS = enum.auto() class GridWorld(dm_env.Environment): def __init__(self, layout, start_state, goal_state=None, observation_type=ObservationType.STATE_INDEX, discount=0.9, penalty_for_walls=-5, reward_goal=10, max_episode_length=None, randomize_goals=False): """Build a grid environment. Simple gridworld defined by a map layout, a start and a goal state. Layout should be a NxN grid, containing: * 0: empty * -1: wall * Any other positive value: value indicates reward; episode will terminate Args: layout: NxN array of numbers, indicating the layout of the environment. start_state: Tuple (y, x) of starting location. goal_state: Optional tuple (y, x) of goal location. Will be randomly sampled once if None. observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x) discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). reward_goal: Reward added when finding the goal (should be positive). max_episode_length: If set, will terminate an episode after this many steps. randomize_goals: If true, randomize goal at every episode. """ if observation_type not in ObservationType: raise ValueError('observation_type should be a ObservationType instace.') self._layout = np.array(layout) self._start_state = start_state self._state = self._start_state self._number_of_states = np.prod(np.shape(self._layout)) self._discount = discount self._penalty_for_walls = penalty_for_walls self._reward_goal = reward_goal self._observation_type = observation_type self._layout_dims = self._layout.shape self._max_episode_length = max_episode_length self._num_episode_steps = 0 self._randomize_goals = randomize_goals if goal_state is None: # Randomly sample goal_state if not provided goal_state = self._sample_goal() self.goal_state = goal_state def _sample_goal(self): """Randomly sample reachable non-starting state.""" # Sample a new goal n = 0 max_tries = 1e5 while n < max_tries: goal_state = tuple(np.random.randint(d) for d in self._layout_dims) if goal_state != self._state and self._layout[goal_state] == 0: # Reachable state found! 
return goal_state n += 1 raise ValueError('Failed to sample a goal state.') @property def layout(self): return self._layout @property def number_of_states(self): return self._number_of_states @property def goal_state(self): return self._goal_state @property def start_state(self): return self._start_state @property def state(self): return self._state def set_state(self, x, y): self._state = (y, x) @goal_state.setter def goal_state(self, new_goal): if new_goal == self._state or self._layout[new_goal] < 0: raise ValueError('This is not a valid goal!') # Zero out any other goal self._layout[self._layout > 0] = 0 # Setup new goal location self._layout[new_goal] = self._reward_goal self._goal_state = new_goal def observation_spec(self): if self._observation_type is ObservationType.AGENT_ONEHOT: return specs.Array( shape=self._layout_dims, dtype=np.float32, name='observation_agent_onehot') elif self._observation_type is ObservationType.GRID: return specs.Array( shape=self._layout_dims + (3,), dtype=np.float32, name='observation_grid') elif self._observation_type is ObservationType.AGENT_GOAL_POS: return specs.Array( shape=(4,), dtype=np.float32, name='observation_agent_goal_pos') elif self._observation_type is ObservationType.STATE_INDEX: return specs.DiscreteArray( self._number_of_states, dtype=int, name='observation_state_index') def action_spec(self): return specs.DiscreteArray(4, dtype=int, name='action') def get_obs(self): if self._observation_type is ObservationType.AGENT_ONEHOT: obs = np.zeros(self._layout.shape, dtype=np.float32) # Place agent obs[self._state] = 1 return obs elif self._observation_type is ObservationType.GRID: obs = np.zeros(self._layout.shape + (3,), dtype=np.float32) obs[..., 0] = self._layout < 0 obs[self._state[0], self._state[1], 1] = 1 obs[self._goal_state[0], self._goal_state[1], 2] = 1 return obs elif self._observation_type is ObservationType.AGENT_GOAL_POS: return np.array(self._state + self._goal_state, dtype=np.float32) elif self._observation_type is ObservationType.STATE_INDEX: y, x = self._state return y * self._layout.shape[1] + x def reset(self): self._state = self._start_state self._num_episode_steps = 0 if self._randomize_goals: self.goal_state = self._sample_goal() return dm_env.TimeStep( step_type=dm_env.StepType.FIRST, reward=None, discount=None, observation=self.get_obs()) def step(self, action): y, x = self._state if action == 0: # up new_state = (y - 1, x) elif action == 1: # right new_state = (y, x + 1) elif action == 2: # down new_state = (y + 1, x) elif action == 3: # left new_state = (y, x - 1) else: raise ValueError( 'Invalid action: {} is not 0, 1, 2, or 3.'.format(action)) new_y, new_x = new_state step_type = dm_env.StepType.MID if self._layout[new_y, new_x] == -1: # wall reward = self._penalty_for_walls discount = self._discount new_state = (y, x) elif self._layout[new_y, new_x] == 0: # empty cell reward = 0. discount = self._discount else: # a goal reward = self._layout[new_y, new_x] discount = 0. 
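      # Reaching a goal cell ends the episode: reset to the start state and emit a LAST step.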
new_state = self._start_state step_type = dm_env.StepType.LAST self._state = new_state self._num_episode_steps += 1 if (self._max_episode_length is not None and self._num_episode_steps >= self._max_episode_length): step_type = dm_env.StepType.LAST return dm_env.TimeStep( step_type=step_type, reward=np.float32(reward), discount=discount, observation=self.get_obs()) def plot_grid(self, add_start=True): plt.figure(figsize=(4, 4)) plt.imshow(self._layout <= -1, interpolation='nearest') ax = plt.gca() ax.grid(0) plt.xticks([]) plt.yticks([]) # Add start/goal if add_start: plt.text( self._start_state[1], self._start_state[0], r'$\mathbf{S}$', fontsize=16, ha='center', va='center') plt.text( self._goal_state[1], self._goal_state[0], r'$\mathbf{G}$', fontsize=16, ha='center', va='center') h, w = self._layout.shape for y in range(h - 1): plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-k', lw=2) for x in range(w - 1): plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-k', lw=2) def plot_state(self, return_rgb=False): self.plot_grid(add_start=False) # Add the agent location plt.text( self._state[1], self._state[0], u'😃', fontname='symbola', fontsize=18, ha='center', va='center', ) if return_rgb: fig = plt.gcf() plt.axis('tight') plt.subplots_adjust(0, 0, 1, 1, 0, 0) fig.canvas.draw() data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') w, h = fig.canvas.get_width_height() data = data.reshape((h, w, 3)) plt.close(fig) return data def plot_policy(self, policy): action_names = [ r'$\uparrow$', r'$\rightarrow$', r'$\downarrow$', r'$\leftarrow$' ] self.plot_grid() plt.title('Policy Visualization') h, w = self._layout.shape for y in range(h): for x in range(w): # if ((y, x) != self._start_state) and ((y, x) != self._goal_state): if (y, x) != self._goal_state: action_name = action_names[policy[y, x]] plt.text(x, y, action_name, ha='center', va='center') def plot_greedy_policy(self, q): greedy_actions = np.argmax(q, axis=2) self.plot_policy(greedy_actions) def build_gridworld_task(task, discount=0.9, penalty_for_walls=-5, observation_type=ObservationType.STATE_INDEX, max_episode_length=200): """Construct a particular Gridworld layout with start/goal states. Args: task: string name of the task to use. One of {'simple', 'obstacle', 'random_goal'}. discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x). max_episode_length: If set, will terminate an episode after this many steps. 
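
  Returns:
    A GridWorld instance configured with the requested layout, start state and
    goal state.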
""" tasks_specifications = { 'simple': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (7, 2) }, 'obstacle': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1], [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (2, 8) }, 'random_goal': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), # 'randomize_goals': True }, } return GridWorld( discount=discount, penalty_for_walls=penalty_for_walls, observation_type=observation_type, max_episode_length=max_episode_length, **tasks_specifications[task]) def setup_environment(environment): """Returns the environment and its spec.""" # Make sure the environment outputs single-precision floats. environment = wrappers.SinglePrecisionWrapper(environment) # Grab the spec of the environment. environment_spec = specs.make_environment_spec(environment) return environment, environment_spec ###Output _____no_output_____ ###Markdown We will use two distinct tabular GridWorlds:* `simple` where the goal is at the bottom left of the grid, little navigation required.* `obstacle` where the goal is behind an obstacle the agent must avoid.You can visualize the grid worlds by running the cell below. Note that **S** indicates the start state and **G** indicates the goal. ###Code # Visualise GridWorlds # Instantiate two tabular environments, a simple task, and one that involves # the avoidance of an obstacle. simple_grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID) obstacle_grid = build_gridworld_task( task='obstacle', observation_type=ObservationType.GRID) # Plot them. simple_grid.plot_grid() plt.title('Simple') obstacle_grid.plot_grid() plt.title('Obstacle') ###Output _____no_output_____ ###Markdown In this environment, the agent has four possible **actions**: `up`, `right`, `down`, and `left`. The **reward** is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal, and otherwise continues. The **discount** on continuing steps, is $\gamma = 0.9$. Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g. **observations**) or consumes (e.g. **actions**). The `environment_spec` will show you the form of the **observations**, **rewards** and **discounts** that the environment exposes and the form of the **actions** that can be taken. ###Code # @title Look at environment_spec { form-width: "30%" } # Note: setup_environment is implemented in the same cell as GridWorld. 
environment, environment_spec = setup_environment(simple_grid) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ###Output _____no_output_____ ###Markdown We first set the environment to its initial location by calling the `reset()` method which returns the first observation. ###Code environment.reset() environment.plot_state() ###Output _____no_output_____ ###Markdown Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.Let's take an action and visualise the resulting state of the grid-world. (You'll need to rerun the cell if you pick a new action.) ###Code #@title Pick an action and see the state changing action = "left" #@param ["up", "right", "down", "left"] {type:"string"} action_int = {'up': 0, 'right': 1, 'down': 2, 'left':3 } action = int(action_int[action]) timestep = environment.step(action) # pytype: dm_env.TimeStep environment.plot_state() #@title Run loop { form-width: "30%" } #@markdown Double-click to inspect the `run_loop` function. def run_loop(environment, agent, num_episodes=None, num_steps=None, logger_time_delta=1., label='training_loop', log_loss=False, ): """Perform the run loop. We are following the Acme run loop. Run the environment loop for `num_episodes` episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely. Args: environment: dm_env used to generate trajectories. agent: acme.Actor for selecting actions in the run loop. num_steps: number of episodes to run the loop for. If `None` (default), runs without limit. num_episodes: number of episodes to run the loop for. If `None` (default), runs without limit. logger_time_delta: time interval (in seconds) between consecutive logging steps. label: optional label used at logging steps. """ logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta) iterator = range(num_episodes) if num_episodes else itertools.count() all_returns = [] num_total_steps = 0 for episode in iterator: # Reset any counts and start the environment. start_time = time.time() episode_steps = 0 episode_return = 0 episode_loss = 0 timestep = environment.reset() # Make the first observation. agent.observe_first(timestep) # Run an episode. while not timestep.last(): # Generate an action from the agent's policy and step the environment. action = agent.select_action(timestep.observation) timestep = environment.step(action) # Have the agent observe the timestep and let the agent update itself. agent.observe(action, next_timestep=timestep) agent.update() # Book-keeping. episode_steps += 1 num_total_steps += 1 episode_return += timestep.reward if log_loss: episode_loss += agent.last_loss if num_steps is not None and num_total_steps >= num_steps: break # Collect the results and combine with counts. 
steps_per_second = episode_steps / (time.time() - start_time) result = { 'episode': episode, 'episode_length': episode_steps, 'episode_return': episode_return, } if log_loss: result['loss_avg'] = episode_loss/episode_steps all_returns.append(episode_return) # Log the given results. logger.write(result) if num_steps is not None and num_total_steps >= num_steps: break return all_returns #@title Implement the evaluation loop { form-width: "30%" } #@markdown Double-click to inspect the `evaluate` function. def evaluate(environment: dm_env.Environment, agent: acme.Actor, evaluation_episodes: int): frames = [] for episode in range(evaluation_episodes): timestep = environment.reset() episode_return = 0 steps = 0 while not timestep.last(): frames.append(environment.plot_state(return_rgb=True)) action = agent.select_action(timestep.observation) timestep = environment.step(action) steps += 1 episode_return += timestep.reward print( f'Episode {episode} ended with reward {episode_return} in {steps} steps' ) return frames def display_video(frames: Sequence[np.ndarray], filename: str = 'temp.mp4', frame_rate: int = 12): """Save and display video.""" # Write the frames to a video. with imageio.get_writer(filename, fps=frame_rate) as video: for frame in frames: video.append_data(frame) # Read video and display the video. video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) ###Output _____no_output_____ ###Markdown Section 2.2: The AgentWe will be implementing Tabular & Function Approximation agents. Tabular agents are purely in Python.All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs: Agent interfaceEach agent implements the following functions:```pythonclass Agent(acme.Actor): def __init__(self, number_of_actions, number_of_states, ...): """Provides the agent the number of actions and number of states.""" def select_action(self, observation): """Generates actions from observations.""" def observe_first(self, timestep): """Records the initial timestep in a trajectory.""" def observe(self, action, next_timestep): """Records the transition which occurred from taking an action.""" def update(self): """Updates the agent's internals to potentially change its behavior."""```Remarks on the `observe()` function:1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.2. The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.3. The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`. Coding Exercise 2.1: Random AgentBelow is a partially complete implemention of an agent that follows a random policy. Fill in the ```select_action``` method.The ```select_action``` method should return a random **integer** between 0 and ```self._num_actions``` (not a tensor or an array!) 
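Before filling it in, it may help to see how these methods are exercised by the `run_loop` defined above. A stripped-down sketch of one episode of interaction (omitting the logging and step counting):

```python
timestep = environment.reset()
agent.observe_first(timestep)                        # record the initial observation
while not timestep.last():
  action = agent.select_action(timestep.observation)  # the method you implement here
  timestep = environment.step(action)
  agent.observe(action, next_timestep=timestep)       # record the resulting transition
  agent.update()                                      # learn from it (a no-op for RandomAgent)
```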
###Code class RandomAgent(acme.Actor): def __init__(self, environment_spec): """Gets the number of available actions from the environment spec.""" self._num_actions = environment_spec.actions.num_values def select_action(self, observation): """Selects an action uniformly at random.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the select action method") ################################################# # TODO return a random integer action = ... return action def observe_first(self, timestep): """Does not record as the RandomAgent has no use for data.""" pass def observe(self, action, next_timestep): """Does not record as the RandomAgent has no use for data.""" pass def update(self): """Does not update as the RandomAgent does not learn from data.""" pass ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_3b0318bf.py) ###Code #@title Visualisation { form-width: "30%" } # Create the agent by giving it the action space specification. agent = RandomAgent(environment_spec) # Run the agent in the evaluation loop, which returns the frames. frames = evaluate(environment, agent, evaluation_episodes=1) # Visualize the random agent's episode. display_video(frames) ###Output _____no_output_____ ###Markdown ---Section 3: The Bellman Equation ###Code #@title Video 3: The Bellman Equation # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="cLCoNBmYUns", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown In this tutorial we focus mainly on **value based methods**, where agents maintain a value for all state-action pairs and use those estimates to choose actions that maximize that **value** (instead of maintaining a policy directly, like in **policy gradient methods**). We represent the **action-value function** (otherwise known as $\color{green}Q$-function associated with following/employing a policy $\pi$ in a given MDP as:$$ \color{green}Q^{\color{blue}{\pi}}(\color{red}{s},\color{blue}{a}) = \mathbb{E}_{\tau \sim P^{\color{blue}{\pi}}} \left[ \sum_t \gamma^t \color{green}{r_t}| s_0=\color{red}s,a_0=\color{blue}{a} \right]$$where $\tau = \{\color{red}{s_0}, \color{blue}{a_0}, \color{green}{r_0}, \color{red}{s_1}, \color{blue}{a_1}, \color{green}{r_1}, \cdots \}$Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:$$ \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a}) = \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\left( \color{green}{R}(\color{red}{s},\color{blue}{a}, \color{red}{s'}) + \gamma \color{green}V^\color{blue}{\pi}(\color{red}{s'}) \right)$$where $\color{green}V^\color{blue}{\pi}$ is the expected $\color{green}Q^\color{blue}{\pi}$ value for a particular state, i.e. $\color{green}V^\color{blue}{\pi}(\color{red}{s}) = \sum_{\color{blue}{a} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi}(\color{blue}{a} |\color{red}{s}) \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a})$. 
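As a small numerical illustration of the last identity, consider a single state with four actions and an $\epsilon$-greedy policy (the Q-values below are made up purely for illustration):

```python
import numpy as np

q_s = np.array([1.0, 3.0, 0.5, 2.0])  # hypothetical Q(s, a) for actions (up, right, down, left)

# An epsilon-greedy policy puts probability (1 - epsilon) on the greedy action
# and spreads epsilon uniformly over all actions.
epsilon = 0.1
pi_s = np.full(4, epsilon / 4)
pi_s[np.argmax(q_s)] += 1 - epsilon

# V(s) = sum_a pi(a|s) * Q(s, a)
v_s = np.sum(pi_s * q_s)
print(v_s)  # 2.8625 -- dominated by the greedy action's value of 3.0
```

This is exactly the quantity the helper `plot_state_value` visualises: `(1 - epsilon) * max(q) + epsilon * mean(q)`.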
--- Section 4: Policy Evaluation ###Code #@title Video 4: Policy Evaluation # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="HAxR4SuaZs4", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Tabular agents implement a function `q_values()` returning a matrix of Q valuesof shape: (`number_of_states`, `number_of_actions`)In this section, we will implement a `PolicyEvalAgent` as an ACME actor: given an `evaluation_policy` and a `behaviour_policy`, it will use the `behaviour_policy` to choose actions, and it will use the corresponding trajectory data to evaluate the `evaluation_policy` (i.e. compute the Q-values as if you were following the `evaluation_policy`). ###Code # Uniform random policy def random_policy(q): return np.random.randint(4) ###Output _____no_output_____ ###Markdown Coding Exercise 4.1 Policy Evaluation Agent ###Code class PolicyEvalAgent(acme.Actor): def __init__(self, number_of_states, number_of_actions, evaluated_policy, behaviour_policy=random_policy, step_size=0.1): self._state = None self._number_of_states = number_of_states self._number_of_actions = number_of_actions self._step_size = step_size self._behaviour_policy = behaviour_policy self._evaluated_policy = evaluated_policy ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Initialize your Q-values!") ################################################# # (this is a table of state and action pairs) # Note: this can be random, but the code was tested w/ zero-initialization self._q = ... self._action = None self._next_state = None @property def q_values(self): # return the Q values return ... def select_action(self, observation): # Select an action return ... def observe_first(self, timestep): self._state = timestep.observation def observe(self, action, next_timestep): s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute TD-Error. self._td_error = ... def update(self): # Updates s = self._state a = self._action # Q-value table update. self._q[s, a] += ... # Update the state self._state = ... ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_f988b0c4.py) --- Section 5: Tabular Value-Based Model-Free Learning ###Code #@title Video 5: Model-Free Learning # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="Y4TweUYnexU", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Section 5.1: On-policy control: SARSA AgentIn this section, we are focusing on control RL algorithms, which perform the **evaluation** and **improvement** of the policy synchronously. That is, the policy that is being evaluated improves as the agent is using it to interact with the environent.The first algorithm we are going to be looking at is SARSA. This is an **on-policy algorithm** -- i.e: the data collection is done by leveraging the policy we're trying to optimize. 
As discussed during lectures, a greedy policy with respect to a given $\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\epsilon$-greedy policy with respect to $\color{Green}Q$. SARSA Algorithm**Input:**- $\epsilon \in (0, 1)$ the probability of taking a random action , and- $\alpha > 0$ the step size, also known as learning rate.**Initialize:** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}$**Loop forever:**1. Get $\color{red}s \gets{}$current (non-terminal) state 2. Select $\color{blue}a \gets{} \text{epsilon_greedy}(\color{green}Q(\color{red}s, \cdot))$ 3. Step in the environment by passing the selected action $\color{blue}a$4. Observe resulting reward $\color{green}r$, discount $\gamma$, and state $\color{red}{s'}$5. Compute TD error: $\Delta \color{green}Q \gets \color{green}r + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}s, \color{blue}a)$, where $\color{blue}{a'} \gets \text{epsilon_greedy}(\color{green}Q(\color{red}{s'}, \cdot))$5. Update $\color{green}Q(\color{red}s, \color{blue}a) \gets \color{green}Q(\color{red}s, \color{blue}a) + \alpha \Delta \color{green}Q$ Coding Exercise 5.1: Implement SARSABelow you will find incomplete code for sampling from an $\epsilon$-greedy policy, and for implementing an agent that learns values according to the SARSA algorithm. ###Code def epsilon_greedy( q_values_at_s: np.ndarray, # Q-values in state s: Q(s, :). epsilon: float = 0.1 # Probability of taking a random action. ): """Return an epsilon-greedy action sample.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete epsilon greedy policy function") ################################################# # TODO return the action greedy to Q values if ...: # Greedy: Pick action with the largest Q-value. action = ... else: # Get the number of actions from the size of the given vector of Q-values. num_actions = np.array(q_values_at_s).shape[-1] # TODO else return a random action action = ... return action ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_8a39c08a.py) Coding Exercise 5.2: Run your SARSA agent on the `obstacle` environmentThis environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods. Try varying the number of steps. ###Code class SarsaAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, epsilon: float, step_size: float = 0.1 ): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size self._epsilon = epsilon # Containers you may find useful. 
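    # They hold the most recent state, action and next state, so that observe() can
    # compute the TD error and update() can apply it to the Q-table.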
self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return epsilon_greedy(self._q[observation], self._epsilon) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the action that would be taken from the next state. next_a = self.select_action(next_s) # Compute the on-policy Q-value update. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the on-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a] def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). self._q[s, a] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7bde630d.py) ###Code #@title Run SARSA agent num_steps = 1e5 #@param {type:"number"} num_steps = int(num_steps) # Create the environment. grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # Create the agent. agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1) # Run the experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) print('AFTER {0:,} STEPS ...'.format(num_steps)) # Get the Q-values and reshape them to recover grid-like structure of states. q_values = agent.q_values grid_shape = grid.layout.shape q_values = q_values.reshape([*grid_shape, -1]) # Visualize the value and Q-value tables. plot_action_values(q_values, epsilon=1.) # Visualize the greedy policy. environment.plot_greedy_policy(q_values) ###Output _____no_output_____ ###Markdown Section 5.2 Off-policy control: Q-learning AgentReminder: $\color{green}Q$-learning is a very powerful and general algorithm, that enables control (figuring out the optimal policy/value function) both on and off-policy.**Initialize** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s} \in \color{red}{\mathcal{S}}$ and $\color{blue}{a} \in \color{blue}{\mathcal{A}}$**Loop forever**:1. Get $\color{red}{s} \gets{}$current (non-terminal) state 2. Select $\color{blue}{a} \gets{} \text{behaviour_policy}(\color{red}{s})$ 3. Step in the environment by passing the selected action $\color{blue}{a}$4. Observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$5. 
Compute the TD error: $\Delta \color{green}Q \gets \color{green}{r} + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}{s}, \color{blue}{a})$, where $\color{blue}{a'} \gets \arg\max_{\color{blue}{\mathcal A}} \color{green}Q(\color{red}{s'}, \cdot)$6. Update $\color{green}Q(\color{red}{s}, \color{blue}{a}) \gets \color{green}Q(\color{red}{s}, \color{blue}{a}) + \alpha \Delta \color{green}Q$Notice that the actions $\color{blue}{a}$ and $\color{blue}{a'}$ are not selected using the same policy, hence this algorithm being **off-policy**. Coding Exercise 5.3: Implement Q-Learning ###Code QValues = np.ndarray Action = int # A value-based policy takes the Q-values at a state and returns an action. ValueBasedPolicy = Callable[[QValues], Action] class QLearningAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, behaviour_policy: ValueBasedPolicy, step_size: float = 0.1): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size # Store behavior policy. self._behaviour_policy = behaviour_policy # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the TD error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the off-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). self._q[...] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_195bbb16.py) Run your Q-learning agent on the `obstacle` environment ###Code #@title Run your Q-learning epsilon = 1. 
#@param {type:"number"} num_steps = 1e5 #@param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=0) # visualise the greedy policy grid.plot_greedy_policy(q) ###Output _____no_output_____ ###Markdown Experiment with different levels of greediness* The default was $\epsilon=1.$, what does this correspond to?* Try also $\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way? ###Code #@title Run the cell epsilon = 0.1 #@param {type:"number"} num_steps = 1e5 #@param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=epsilon) # visualise the greedy policy grid.plot_greedy_policy(q) ###Output _____no_output_____ ###Markdown So far we only considered look-up tables for value-functions. In all previous cases every state and action pair $(\color{red}{s}, \color{blue}{a})$, had an entry in our $\color{green}Q$-table. Again, this is possible in this environment as the number of states is equal to the number of cells in the grid. But this is not scalable to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table could be in this situation?).An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.But what we **really** want is just to be able to *compute* the Q-value, when fed with a particular $(\color{red}{s}, \color{blue}{a})$ pair. So if we had a way to get a function to do this work instead of keeping a big table, we'd get around this problem.To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** them to output the values they should. In this section, we will explore $\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf). 
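To make this concrete before diving in, here is a minimal sketch of such a function approximator (the layer sizes are arbitrary and purely illustrative): a small network that takes the `AGENT_GOAL_POS` observation `(agent_y, agent_x, goal_y, goal_x)` and outputs one Q-value per action, so that Q-values are computed on the fly instead of looked up in a table.

```python
import torch
import torch.nn as nn

num_actions = 4  # up, right, down, left

# A tiny Q-network: 4 observation features in, one Q-value per action out.
q_net = nn.Sequential(
    nn.Linear(4, 32),
    nn.ReLU(),
    nn.Linear(32, num_actions),
)

obs = torch.tensor([[2., 2., 7., 2.]])  # (agent_y, agent_x, goal_y, goal_x) for the 'simple' task
q_values = q_net(obs)                   # shape [1, num_actions]
greedy_action = q_values.argmax(dim=-1)
```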
--- Section 6: Function Approximation ###Code #@title Video 6: Function approximation # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="7_MYePyYhrM", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Section 6.1 Replay BuffersAn important property of off-policy methods like $\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.In order to optimize the $\color{green}Q$-function we can then sample data from the replay **dataset** and use that data to perform an update. An illustration of this learning loop is shown below. In the next cell we will show how to implement a simple replay buffer. This can be as simple as a python list containing transition data. In more complicated scenarios we might want to have a more performance-tuned variant, we might have to be more concerned about how large replay is and what to do when its full, and we might want to sample from replay in different ways. But a simple python list can go a surprisingly long way. ###Code # Simple replay buffer # Create a convenient container for the SARS tuples required by deep RL agents. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class ReplayBuffer(object): """A simple Python replay buffer.""" def __init__(self, capacity: int = None): self.buffer = collections.deque(maxlen=capacity) self._prev_state = None def add_first(self, initial_timestep: dm_env.TimeStep): self._prev_state = initial_timestep.observation def add(self, action: int, timestep: dm_env.TimeStep): transition = Transitions( state=self._prev_state, action=action, reward=timestep.reward, discount=timestep.discount, next_state=timestep.observation, ) self.buffer.append(transition) self._prev_state = timestep.observation def sample(self, batch_size: int) -> Transitions: # Sample a random batch of Transitions as a list. batch_as_list = random.sample(self.buffer, batch_size) # Convert the list of `batch_size` Transitions into a single Transitions # object where each field has `batch_size` stacked fields. return tree_utils.stack_sequence_fields(batch_as_list) def flush(self) -> Transitions: entire_buffer = tree_utils.stack_sequence_fields(self.buffer) self.buffer.clear() return entire_buffer def is_ready(self, batch_size: int) -> bool: return batch_size <= len(self.buffer) ###Output _____no_output_____ ###Markdown Section 6.2: NFQ Agent[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$In other words, the value $\color{green}Q(\color{red}{s}, \color{blue}{a})$ are approximated by the output of a neural network $\color{green}{Q_w}(\color{red}{s}, \color{blue}{a})$ for each possible action $\color{blue}{a} \in \color{blue}{\mathcal{A}}$.$^2$When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. 
But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.By training our neural network to output values such that the *TD error is minimized*, we will also satisfy the Bellman Optimality Equation, which is a good sufficient condition to enforce, to obtain an optimal policy.Thanks to automatic differentiation, we can just write the TD error as a loss, e.g. with an $\ell^2$ loss, but others would work too:$$L(\color{green}w) = \mathbb{E}\left[ \left( \color{green}{r} + \gamma \max_\color{blue}{a'} \color{green}{Q_w}(\color{red}{s'}, \color{blue}{a'}) − \color{green}{Q_w}(\color{red}{s}, \color{blue}{a}) \right)^2\right].$$Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.NFQ builds on $\color{green}Q$-learning, but if one were to update the Q-values online directly, the training can be unstable and very slow.Instead, NFQ uses a replay buffer, similar to what you just implemented above, to update the Q-value in a batched setting.When it was introduced, it also was entirely off-policy using a uniformly random policy to collect data, which was prone to instability when applied to more complex environments (e.g. when the input are pixels or the tasks are longer and more complicated).But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.Below you will find an incomplete NFQ agent that takes in observations from a gridworld. Instead of receiving a tabular state, it receives an observation in the form of its (x,y) coordinates in the gridworld, and the (x,y) coordinates of the goal.---$^1$ If you read the NFQ paper, they use a "control" notation, where there is a "cost to minimize", instead of "rewards to maximize", so don't be surprised if signs/max/min do not correspond.$^2$ We could feed it $\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $argmax$ over them, it's easiest to just output them all in one pass. Coding Exercise 6.1: Implement NFQ ###Code # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class NeuralFittedQAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, q_network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 3e-4): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. 
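    # With probability (1 - epsilon) act greedily on the network's Q-values;
    # otherwise pick an action uniformly at random.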
if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): q_next_s = self._q_network(next_s) # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the NFQ Agent") ################################################# # TODO Average the squared TD errors over the entire batch (axis=0). loss = ... # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_b33e659b.py) Train and Evaluate the NFQ Agent ###Code #@title Training the NFQ Agent. { form-width: "30%" } epsilon = 0.5 # @param {type:"number"} max_episode_length = 200 # Create the environment. grid = build_gridworld_task( task='simple', observation_type=ObservationType.AGENT_GOAL_POS, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Define the neural function approximator (aka Q network). q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values)) # Build the trainable Q-learning agent agent = NeuralFittedQAgent( environment_spec, q_network, epsilon=epsilon, replay_capacity=100_000, batch_size=10, learning_rate=1e-3) returns = run_loop( environment=environment, agent=agent, num_episodes=100, logger_time_delta=1., log_loss=True) #@title Evaluating the agent. { form-width: "30%" } # Temporarily change epsilon to be more greedy; remember to change it back. agent._epsilon = 0.05 # Record a few episodes. frames = evaluate(environment, agent, evaluation_episodes=5) # Change epsilon back. agent._epsilon = epsilon # Display the video of the episodes. 
display_video(frames, frame_rate=6) #@title Visualise the learned Q values { form-width: "30%" } # Evaluate the policy for every state, similar to tabular agents above. environment.reset() pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) ###Output _____no_output_____ ###Markdown Compare the greedy and behaviour ($\epsilon$-greedy) policiesNotice that the behaviour policy randomly flips arrows to random directions. ###Code environment.plot_greedy_policy(q) plt.title('Greedy policy using the learnt Q-values') environment.plot_policy(pi) plt.title("Policy using the agent's behaviour policy"); ###Output _____no_output_____ ###Markdown --- Section 7: DQN ###Code #@title Video 7: Deep Q-Networks (DQN) # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="HEDoNtV1y-w", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown --> In this section, we will look at an advanced deep RL Agent based on the following publication, [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.Here the agent will act directly on a pixel representation of the gridworld. You can find an incomplete implementation below. Coding Exercise 7.1: Run a DQN Agent ###Code class DQN(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 5e-4, target_update_frequency: int = 10): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network self._target_network = copy.deepcopy(self._q_network) # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Keep an internal tracker of steps self._current_step = 0 # How often to update the target network self._target_update_frequency = target_update_frequency # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. # Sonnet requires a batch dimension, which we squeeze out right after. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. 
if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): self._current_step += 1 if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Optionally unpack the transitions to lighten notation. # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the DQN Agent") ################################################# #TODO get the value of the next states evaluated by the target network q_next_s = ... # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. # TODO compute the target value target_q_value = ... # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) # Compute the TD errors. #td_error = target_q_value - q_s_a # Average the squared TD errors over the entire batch (axis=0). loss = self._loss_fn(target_q_value, q_s_a) # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() if self._current_step % self._target_update_frequency == 0: self._target_network.load_state_dict(self._q_network.state_dict()) # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_892324a5.py) ###Code #@title Train and evaluate the DQN agent { form-width: "30%" } epsilon = 0.25 # @param {type: "number"} grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID, max_episode_length=200) environment, environment_spec = setup_environment(grid) # Build the agent's network. 
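# The GRID observation arrives as (batch, height, width, channels); Permute reorders it
# to (batch, channels, height, width), the layout nn.Conv2d expects.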
class Permute(nn.Module): def __init__(self, order: list): super(Permute,self).__init__() self.order = order def forward(self, x): return x.permute(self.order) q_network = nn.Sequential(Permute([0, 3, 1, 2]), nn.Conv2d(3, 32, kernel_size=4, stride=2,padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(3, 1), nn.Flatten(), nn.Linear(384, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values) ) agent = DQN( environment_spec=environment_spec, network=q_network, batch_size=10, epsilon=epsilon, target_update_frequency=25) returns = run_loop( environment=environment, agent=agent, num_episodes=1000, num_steps=100_000) # @title Visualise the learned Q values { form-width: "30%" } # Evaluate the policy for every state, similar to tabular agents above. pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) #@title Compare the greedy policy with the agent's policy { form-width: "30%" } environment.plot_greedy_policy(q) plt.title('Greedy policy using the learnt Q-values') environment.plot_policy(pi) plt.title("Policy using the agent's epsilon-greedy policy"); ###Output _____no_output_____ ###Markdown --- Section 8: Beyond Value Based Model-Free Methods ###Code #@title Video 8: Other RL Methods # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="1N4Jm9loJx4", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video # Tune hyperparameters SEED=2021 learning_rate = 0.01 gamma = 0.99 dropout = 0.6 # hyperparameters hidden_neurons = 128 # Only used in Policy Gradient Method hidden_size = 256 # only used in Actor-Critic Method num_steps = 300 max_episodes = 1000 # Use the CartPole example env = gym.make('CartPole-v1') # Set seeds env.seed(SEED) set_seed(SEED) ###Output _____no_output_____ ###Markdown Section 8.1: Policy gradientNow we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. $\color{blue}\pi(\color{red}s) = \arg\max_{\color{blue}a}\color{green}Q(\color{red}s, \color{blue}a)$, we will directly parameterize the policy and write it as the distribution$$\color{blue}a_t \sim \color{blue}\pi_{\theta}(\color{blue}a_t|\color{red}s_t).$$Here $\theta$ represent the parameters of the policy. We will update the policy parameters using gradient ascent to **maximize** expected future reward.One convenient way to represent the conditional distribution above is as a function that takes a state $\color{red}s$ and returns a distribution over actions $\color{blue}a$.Defined below is an agent which implements the REINFORCE algorithm. REINFORCE (Williams 1992) is the simplest model-free general reinforcement learning technique.The **basic idea** is to use probabilistic action choice. 
If the reward at the end turns out to be high, we make **all** actions in this sequence **more likely** (otherwise, we do the opposite). This may sometimes reinforce "bad" actions as well, but since those actions tend to be part of trajectories with low reward, they will likely not get accentuated. From the lectures, we know that we need to compute$$\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]$$where $\color{green} G_t$ is the sum over future rewards from time $t$, defined as$$\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).$$ The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. It will then update its policy given the above gradient (and the Adam optimizer). A policy gradient method trains an agent without explicitly mapping the value of every state-action pair in the environment: it takes small steps and updates the policy based on the reward associated with each step. In this section, we will build a small network and train it with policy gradients in PyTorch. The agent can receive a reward immediately for an action, or it can receive the reward at a later time, such as the end of the episode. The policy our agent will try to learn is $\pi_\theta(a|s)$, where $\theta$ is the parameter vector, $s$ is a particular state, and $a$ is an action. A Monte Carlo policy gradient approach will be used, which means the agent will run through an entire episode and then update the policy based on the rewards obtained. Coding Exercise 8.1: Creating a simple neural network. Below you will find some incomplete code. Fill in the missing code to construct the specified neural network. Let us define a simple feed forward neural network with one hidden layer of 128 neurons and a dropout of 0.6. Let's use Adam as our optimizer and a learning rate of 0.01. Use the hyperparameters already defined rather than using explicit values. Using dropout will significantly improve the performance of the policy. Do compare your results with and without dropout and experiment with other hyper-parameter values as well. ###Code class PolicyGradientNet(nn.Module): def __init__(self): super(PolicyGradientNet, self).__init__() self.state_space = env.observation_space.shape[0] self.action_space = env.action_space.n ################################################# ## TODO for students: fill in the missing code ## from the first expression raise NotImplementedError("Student exercise: Create FF neural network.") ################################################# self.l1 = ... self.l2 = ... self.gamma = ... # Episode policy and past rewards self.past_policy = Variable(torch.Tensor()) self.reward_episode = [] # Overall reward and past loss self.past_reward = [] self.past_loss = [] def forward(self, x): model = torch.nn.Sequential( self.l1, nn.Dropout(p=dropout), nn.ReLU(), self.l2, nn.Softmax(dim=-1) ) return model(x) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_74dcaa64.py) Now let's create an instance of the network we have defined and use Adam as the optimizer, with the learning_rate hyperparameter already defined above.
###Code policy = PolicyGradientNet() pg_optimizer = optim.Adam(policy.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Select ActionThe `select_action()` function chooses an action based on our policy probability distribution using the PyTorch distributions package. Our policy returns a probability for each possible action in our action space (move left or move right) as an array of length two such as [0.7, 0.3]. We then choose an action based on these probabilities, record our history, and return our action. ###Code def select_action(state): #Select an action (0 or 1) by running policy model and choosing based on the probabilities in state state = torch.from_numpy(state).type(torch.FloatTensor) state = policy(Variable(state)) c = Categorical(state) action = c.sample() # Add log probability of chosen action if policy.past_policy.dim() != 0: policy.past_policy = torch.cat([policy.past_policy, c.log_prob(action).reshape(1)]) else: policy.past_policy = (c.log_prob(action).reshape(1)) return action ###Output _____no_output_____ ###Markdown Update policyThis function updates the policy. Reward $G_t$We update our policy by taking a sample of the action value function $Q^{\pi_\theta} (s_t,a_t)$ by playing through episodes of the game. $Q^{\pi_\theta} (s_t,a_t)$ is defined as the expected return by taking action $a$ in state $s$ following policy $\pi$.We know that for every step the simulation continues we receive a reward of 1. We can use this to calculate the policy gradient at each time step, where $r$ is the reward for a particular state-action pair. Rather than using the instantaneous reward, $r$, we instead use a long term reward $ v_{t} $ where $v_t$ is the discounted sum of all future rewards for the length of the episode. $v_{t}$ is then,$$\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).$$where $\gamma$ is the discount factor (0.99). For example, if an episode lasts 5 steps, the reward for each step will be [4.90, 3.94, 2.97, 1.99, 1].Next we scale our reward vector by substracting the mean from each element and scaling to unit variance by dividing by the standard deviation. This practice is common for machine learning applications and the same operation as Scikit Learn's __[StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)__. It also has the effect of compensating for future uncertainty. Update PolicyAfter each episode we apply Monte-Carlo Policy Gradient to improve our policy according to the equation:$$\Delta\theta_t = \alpha\nabla_\theta \, \log \pi_\theta (s_t,a_t)G_t $$We will then feed our policy history multiplied by our rewards to our optimizer and update the weights of our neural network using stochastic gradient **ascent**. This should increase the likelihood of actions that got our agent a larger reward. Exercise 8.2: Update network weights while updating the overall policyBelow you will find some incomplete code. Fill in the missing code to construct the specified neural network. 
###Code def update_policy(): R = 0 rewards = [] # Discount future rewards back to the present using gamma for r in policy.reward_episode[::-1]: R = r + policy.gamma * R rewards.insert(0, R) # Scale rewards rewards = torch.FloatTensor(rewards) rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps) # Calculate loss pg_loss = (torch.sum(torch.mul(policy.past_policy, Variable(rewards)).mul(-1), -1)) ################################################# ## TODO for students: fill in the missing code ## from the first expression raise NotImplementedError("Student exercise: Update the network weights.") ################################################# # Update network weights # Use zero_grad(), backward() and step() methods of the optimizer instance. pg_optimizer.zero_grad() pg_loss.backward() # Update the weights for param in policy.parameters(): param.grad.data.clamp_(-1, 1) pg_optimizer.step() # Save and intialize episode past counters policy.past_loss.append(pg_loss.item()) policy.past_reward.append(np.sum(policy.reward_episode)) policy.past_policy = Variable(torch.Tensor()) policy.reward_episode= [] ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_3d4bb09a.py) TrainingThis is our main policy training loop. For each step in a training episode, we choose an action, take a step through the environment, and record the resulting new state and reward. We call update_policy() at the end of each episode to feed the episode history to our neural network and improve our policy. ###Code def policy_gradient_train(episodes): running_reward = 10 for episode in range(episodes): state = env.reset() done = False for time in range(1000): action = select_action(state) # Step through environment using chosen action state, reward, done, _ = env.step(action.item()) # Save reward policy.reward_episode.append(reward) if done: break # Used to determine when the environment is solved. running_reward = (running_reward * gamma) + (time * (1 - gamma)) update_policy() if episode % 50 == 0: print(f"Episode {episode}\tLast length: {time:5.0f}" f"\tAverage length: {running_reward:.2f}") if running_reward > env.spec.reward_threshold: print(f"Solved! 
Running reward is now {running_reward} " f"and the last episode runs to {time} time steps!") break ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 1000 policy_gradient_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code #@title Helper function for plotting the training performance def plot_policy_gradient_training(): window = int(episodes / 20) fig, ((ax1), (ax2)) = plt.subplots(1, 2, sharey=True, figsize=[15, 4]); rolling_mean = pd.Series(policy.past_reward).rolling(window).mean() std = pd.Series(policy.past_reward).rolling(window).std() ax1.plot(rolling_mean) ax1.fill_between(range(len(policy.past_reward)), rolling_mean-std, rolling_mean+std, color='orange', alpha=0.2) ax1.set_title(f"Episode Length Moving Average ({window}-episode window)") ax1.set_xlabel('Episode'); ax1.set_ylabel('Episode Length') ax2.plot(policy.past_reward) ax2.set_title('Episode Length') ax2.set_xlabel('Episode') ax2.set_ylabel('Episode Length') fig.tight_layout(pad=2) plt.show() plot_policy_gradient_training() ###Output _____no_output_____ ###Markdown Exercise 8.3: BONUSTry running the model again, by modifying the hyperparameters and observe the outputs.What do you see when you 1. increase learning rate2. decrease learning rate3. decrease gamma4. increase number of neurons in the network. Section 8.2: Actor-criticRecall the policy gradient$$\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]$$The policy parameters are updated using Monte Carlo technique and uses random samples. This introduces high variability in log probabilities and cumulative reward values. This leads to noisy gradients and can cause unstable learning.One way to reduce variance and increase stability is subtracting the cumulative reward by a baseline:$$\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} (G_t - b) \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]$$Intuitively, reducing cumulative reward will make smaller gradients and thus smaller and more stable (hopefully) updates.From the lecture slides, we know that in Actor Critic Method:1. The “Critic” estimates the value function. This could be the action-value (the Q value) or state-value (the V value).2. The “Actor” updates the policy distribution in the direction suggested by the Critic (such as with policy gradients).Both the Critic and Actor functions are parameterized with neural networks. The "Critic" network parameterizes the Q-value. 
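To see concretely why subtracting a baseline helps, below is a minimal, self-contained NumPy sketch (an illustration added for intuition, not part of the graded exercises; the two-action bandit, its `true_q` values and the chosen baseline are assumptions made purely for this demo). It draws Monte Carlo estimates of the policy gradient for a softmax policy with and without a baseline and prints their means and variances: the means agree, but the baselined estimator has a much smaller variance. ###Code
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-state "bandit" with two actions and noisy returns (assumed values).
true_q = np.array([1.0, 2.0])
theta = np.zeros(2)  # parameters of a softmax policy over the two actions


def softmax(x):
  z = np.exp(x - x.max())
  return z / z.sum()


def grad_log_pi(a, pi):
  # d/dtheta log softmax(theta)[a] = one_hot(a) - pi
  g = -pi.copy()
  g[a] += 1.0
  return g


def sample_gradients(baseline, n=5000):
  """Single-sample estimates of (G - baseline) * grad log pi(a)."""
  grads = []
  for _ in range(n):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)
    g_return = true_q[a] + rng.normal(scale=1.0)  # noisy return G
    grads.append((g_return - baseline) * grad_log_pi(a, pi))
  return np.array(grads)


g_plain = sample_gradients(baseline=0.0)
g_base = sample_gradients(baseline=true_q.mean())  # baseline playing the role of V(s)

print('mean without baseline:    ', g_plain.mean(axis=0))
print('mean with baseline:       ', g_base.mean(axis=0))   # same expectation
print('variance without baseline:', g_plain.var(axis=0))
print('variance with baseline:   ', g_base.var(axis=0))    # noticeably smaller
###Output _____no_output_____ ###Markdown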
Actor Critic Network ###Code class ActorCriticNet(nn.Module): def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4): super(ActorCriticNet, self).__init__() self.num_actions = num_actions self.critic_linear1 = nn.Linear(num_inputs, hidden_size) self.critic_linear2 = nn.Linear(hidden_size, 1) self.actor_linear1 = nn.Linear(num_inputs, hidden_size) self.actor_linear2 = nn.Linear(hidden_size, num_actions) self.all_rewards = [] self.all_lengths = [] self.average_lengths = [] def forward(self, state): state = Variable(torch.from_numpy(state).float().unsqueeze(0)) value = F.relu(self.critic_linear1(state)) value = self.critic_linear2(value) policy_dist = F.relu(self.actor_linear1(state)) policy_dist = F.softmax(self.actor_linear2(policy_dist), dim=1) return value, policy_dist ###Output _____no_output_____ ###Markdown Training ###Code def actor_critic_train(episodes): all_lengths = [] average_lengths = [] all_rewards = [] entropy_term = 0 for episode in range(episodes): log_probs = [] values = [] rewards = [] state = env.reset() for steps in range(num_steps): value, policy_dist = actor_critic.forward(state) value = value.detach().numpy()[0, 0] dist = policy_dist.detach().numpy() action = np.random.choice(num_outputs, p=np.squeeze(dist)) log_prob = torch.log(policy_dist.squeeze(0)[action]) entropy = -np.sum(np.mean(dist) * np.log(dist)) new_state, reward, done, _ = env.step(action) rewards.append(reward) values.append(value) log_probs.append(log_prob) entropy_term += entropy state = new_state if done or steps == num_steps - 1: qval, _ = actor_critic.forward(new_state) qval = qval.detach().numpy()[0, 0] all_rewards.append(np.sum(rewards)) all_lengths.append(steps) average_lengths.append(np.mean(all_lengths[-10:])) if episode % 50 == 0: print(f"episode: {episode},\treward: {np.sum(rewards)}," f"\ttotal length: {steps}," f"\taverage length: {average_lengths[-1]}") break # compute Q values qvals = np.zeros_like(values) for t in reversed(range(len(rewards))): qval = rewards[t] + gamma * qval qvals[t] = qval #update actor critic values = torch.FloatTensor(values) qvals = torch.FloatTensor(qvals) log_probs = torch.stack(log_probs) advantage = qvals - values actor_loss = (-log_probs * advantage).mean() critic_loss = 0.5 * advantage.pow(2).mean() ac_loss = actor_loss + critic_loss + 0.001 * entropy_term ac_optimizer.zero_grad() ac_loss.backward() ac_optimizer.step() # Store results actor_critic.average_lengths = average_lengths actor_critic.all_rewards = all_rewards actor_critic.all_lengths = all_lengths ###Output _____no_output_____ ###Markdown Run the model ###Code env.reset() num_inputs = env.observation_space.shape[0] num_outputs = env.action_space.n actor_critic = ActorCriticNet(num_inputs, num_outputs, hidden_size) ac_optimizer = optim.Adam(actor_critic.parameters()) actor_critic_train(500) ###Output _____no_output_____ ###Markdown Plot the results ###Code #@title Helper function for plotting training performance def plot_actor_critic_training(): window = int(episodes / 20) plt.figure(figsize=(15, 4)) plt.subplot(1, 2, 1) smoothed_rewards = pd.Series(actor_critic.all_rewards).rolling(window).mean() std = pd.Series(actor_critic.all_rewards).rolling(window).std() plt.plot(smoothed_rewards, label='Smoothed rewards') plt.fill_between(range(len(smoothed_rewards)), smoothed_rewards-std, smoothed_rewards+std, color='orange', alpha=0.2) plt.xlabel('Episode') plt.ylabel('Reward') plt.subplot(1, 2, 2) plt.plot(actor_critic.all_lengths, label='All lengths') 
plt.plot(actor_critic.average_lengths, label='Average lengths') plt.xlabel('Episode') plt.ylabel('Episode length') plt.legend() plt.tight_layout() plot_actor_critic_training() ###Output _____no_output_____ ###Markdown Exercise 8.4: Effect of episodes on performanceChange the episodes from 500 to 3000 and observe the performance impact. Exercise 8.5: Effect of learning rate on performanceModify the hyperparameters related to learning_rate and gamma and observe the impact on the performance. ---Section 9: RL in the real world ###Code #@title Video 9: Real-world applications and ethics # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="5kBtiW88QVw", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Exercise 9.1: Group discussionForm a group of 2-3 and have discussions (roughly 3 minutes each) of the following questions:1. **Safety**: what are some safety issues that arise in RL that don’t arise with e.g. supervised learning?2. **Generalization**: What happens if your RL agent is presented with data it hasn’t trained on? (“goes out of distribution”)3. How important do you think **interpretability** is in the ethical and safe deployment of RL agents in the real world? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_78c8456b.py) ---Section 10: How to learn more ###Code #@title Video 10: How to learn more # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="dKaOpgor5Ek", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Tutorial 1: Introduction to Reinforcement Learning**Week 3, Day 2: Basic Reinforcement Learning (RL)****By Neuromatch Academy**__Content creators:__ Matthew Sargent, Anoop Kulkarni, Sowmya Parthiban, Feryal Behbahani, Jane Wang__Content reviewers:__ Ezekiel Williams, Mehul Rastogi, Lily Cheng, Roberto Guidotti, Arush Tagade__Content editors:__ Spiros Chavlis __Production editors:__ Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** ---Tutorial ObjectivesBy the end of the tutorial, you should be able to:1. Within the RL framework, be able to identify the different components: environment, agent, states, and actions. 2. Understand the Bellman equation and components involved. 3. Implement tabular value-based model-free learning (Q-learning and SARSA).4. Run a DQN agent and experiment with different hyperparameters.5. Have a high-level understanding of other (nonvalue-based) RL methods.6. Discuss real-world applications and ethical issues of RL.Note: There is an issue with some images not showing up if you're using a Safari browser. Please switch to Chrome if this is the case. ###Code # @title Tutorial slides # @markdown These are the slides for the videos in this tutorial from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/m3kqy/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown --- SetupRun the following 5 cells in order to set up needed functions. Don't worry about the code for now! 
###Code # @title Install requirements # @markdown we install the acme library, see [here](https://github.com/deepmind/acme) for more info !sudo apt-get install -y xvfb ffmpeg --quiet !pip install einops --quiet !pip install dm-acme --quiet !pip install dm-acme[reverb] --quiet !pip install dm-acme[tf] --quiet !pip install dm-acme[envs] --quiet !pip install dm-env --quiet !pip install imageio --quiet !pip install imageio-ffmpeg !pip install gym --quiet !pip install enum --quiet !pip install dm-env --quiet !pip install pandas --quiet !pip install tensorflow --quiet !pip install dm-sonnet --quiet !pip install typing --quiet from IPython.display import clear_output clear_output # Import modules import gym import enum import copy import time import acme import torch import base64 import dm_env import IPython import imageio import warnings import itertools import collections import numpy as np import pandas as pd import sonnet as snt import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import matplotlib as mpl import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf from acme import environment_loop from acme import specs from acme import wrappers from acme.utils import tree_utils from acme.utils import loggers from tqdm import tqdm, trange from torch.autograd import Variable from torch.distributions import Categorical from typing import Callable, Optional, Sequence tf.enable_v2_behavior() warnings.filterwarnings('ignore') np.set_printoptions(precision=3, suppress=1) # @title Figure Settings %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") mpl.rc('image', cmap='Blues') # @title Helper Functions # @markdown Implement helpers for value visualisation map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a] map_from_action_to_name = lambda a: ("up", "right", "down", "left")[a] def plot_values(values, colormap='pink', vmin=-1, vmax=10): plt.imshow(values, interpolation="nearest", cmap=colormap, vmin=vmin, vmax=vmax) plt.yticks([]) plt.xticks([]) plt.colorbar(ticks=[vmin, vmax]) def plot_state_value(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(4, 4)) vmin = np.min(action_values) vmax = np.max(action_values) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_action_values(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(8, 8)) fig.subplots_adjust(wspace=0.3, hspace=0.3) vmin = np.min(action_values) vmax = np.max(action_values) dif = vmax - vmin for a in [0, 1, 2, 3]: plt.subplot(3, 3, map_from_action_to_subplot(a)) plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif) action_name = map_from_action_to_name(a) plt.title(r"$q(s, \mathrm{" + action_name + r"})$") plt.subplot(3, 3, 5) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def smooth(x, window=10): return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1) def plot_stats(stats, window=10): plt.figure(figsize=(16,4)) plt.subplot(121) xline = range(0, len(stats.episode_lengths), window) plt.plot(xline, smooth(stats.episode_lengths, window=window)) plt.ylabel('Episode Length') plt.xlabel('Episode Count') plt.subplot(122) plt.plot(xline, smooth(stats.episode_rewards, window=window)) plt.ylabel('Episode Return') plt.xlabel('Episode 
Count') # @title Set random seed # @markdown Executing `set_seed(seed=seed)` you are setting the seed # for DL its critical to set the random seed so that students can have a # baseline to compare their results to expected results. # Read more here: https://pytorch.org/docs/stable/notes/randomness.html # Call `set_seed` function in the exercises to ensure reproducibility. import random import torch def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') # In case that `DataLoader` is used def seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 np.random.seed(worker_seed) random.seed(worker_seed) # @title Set device (GPU or CPU). Execute `set_device()` # especially if torch modules used. # inform the user if the notebook uses GPU or CPU. def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device SEED = 2021 set_seed(seed=SEED) DEVICE = set_device() ###Output _____no_output_____ ###Markdown --- Section 1: Introduction to Reinforcement Learning ###Code # @title Video 1: Introduction to RL from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV18V411p7iK", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"BWz3scQN50M", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Acme: a research framework for reinforcement learning**Acme** is a library of reinforcement learning (RL) agents and agent building blocks by Google DeepMind. Acme strives to expose simple, efficient, and readable agents, that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to do novel research. The design of Acme also attempts to provide multiple points of entry to the RL problem at differing levels of complexity.For more information see [github repository](https://github.com/deepmind/acme). 
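As a rough illustration of the level of abstraction Acme provides (a sketch only, not a cell meant to be run here, and assuming an `environment` and an Acme `Actor` named `agent` have already been constructed, as we do later in this tutorial), its canonical interaction loop looks like:

```python
from acme import environment_loop

# Couple an environment with an actor; the loop takes care of calling
# observe_first / select_action / observe / update on our behalf.
loop = environment_loop.EnvironmentLoop(environment, agent)
loop.run(num_episodes=10)
```

In this tutorial we use our own lightweight `run_loop` (defined below), which follows the same pattern.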
--- Section 2: General Formulation of RL Problems and Gridworlds ###Code # @title Video 2: General Formulation and MDPs from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1k54y1E7Zn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"h6TxAALY5Fc", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown The agent interacts with the environment in a loop corresponding to the following diagram. The environment defines a set of **actions** that an agent can take. The agent takes an action informed by the **observations** it receives, and will get a **reward** from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulation of rewards obtained from the environment. Section 2.1: The Environment For this practical session we will focus on a **simple grid world** environment,which consists of a 9 x 10 grid of either wall or empty cells, depicted in black and white, respectively. The smiling agent starts from an initial location and needs to navigate to reach the goal square.Below you will find an implementation of this Gridworld as a ```dm_env.Environment```.There is no coding in this section, but if you want, you can look over the provided code so that you can familiarize yourself with an example of how to set up a **grid world** environment. ###Code # @title Implement GridWorld { form-width: "30%" } # @markdown *Double-click* to inspect the contents of this cell. class ObservationType(enum.IntEnum): STATE_INDEX = enum.auto() AGENT_ONEHOT = enum.auto() GRID = enum.auto() AGENT_GOAL_POS = enum.auto() class GridWorld(dm_env.Environment): def __init__(self, layout, start_state, goal_state=None, observation_type=ObservationType.STATE_INDEX, discount=0.9, penalty_for_walls=-5, reward_goal=10, max_episode_length=None, randomize_goals=False): """Build a grid environment. Simple gridworld defined by a map layout, a start and a goal state. Layout should be a NxN grid, containing: * 0: empty * -1: wall * Any other positive value: value indicates reward; episode will terminate Args: layout: NxN array of numbers, indicating the layout of the environment. start_state: Tuple (y, x) of starting location. goal_state: Optional tuple (y, x) of goal location. Will be randomly sampled once if None. observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x) discount: Discounting factor included in all Timesteps. 
penalty_for_walls: Reward added when hitting a wall (should be negative). reward_goal: Reward added when finding the goal (should be positive). max_episode_length: If set, will terminate an episode after this many steps. randomize_goals: If true, randomize goal at every episode. """ if observation_type not in ObservationType: raise ValueError('observation_type should be a ObservationType instace.') self._layout = np.array(layout) self._start_state = start_state self._state = self._start_state self._number_of_states = np.prod(np.shape(self._layout)) self._discount = discount self._penalty_for_walls = penalty_for_walls self._reward_goal = reward_goal self._observation_type = observation_type self._layout_dims = self._layout.shape self._max_episode_length = max_episode_length self._num_episode_steps = 0 self._randomize_goals = randomize_goals if goal_state is None: # Randomly sample goal_state if not provided goal_state = self._sample_goal() self.goal_state = goal_state def _sample_goal(self): """Randomly sample reachable non-starting state.""" # Sample a new goal n = 0 max_tries = 1e5 while n < max_tries: goal_state = tuple(np.random.randint(d) for d in self._layout_dims) if goal_state != self._state and self._layout[goal_state] == 0: # Reachable state found! return goal_state n += 1 raise ValueError('Failed to sample a goal state.') @property def layout(self): return self._layout @property def number_of_states(self): return self._number_of_states @property def goal_state(self): return self._goal_state @property def start_state(self): return self._start_state @property def state(self): return self._state def set_state(self, x, y): self._state = (y, x) @goal_state.setter def goal_state(self, new_goal): if new_goal == self._state or self._layout[new_goal] < 0: raise ValueError('This is not a valid goal!') # Zero out any other goal self._layout[self._layout > 0] = 0 # Setup new goal location self._layout[new_goal] = self._reward_goal self._goal_state = new_goal def observation_spec(self): if self._observation_type is ObservationType.AGENT_ONEHOT: return specs.Array( shape=self._layout_dims, dtype=np.float32, name='observation_agent_onehot') elif self._observation_type is ObservationType.GRID: return specs.Array( shape=self._layout_dims + (3,), dtype=np.float32, name='observation_grid') elif self._observation_type is ObservationType.AGENT_GOAL_POS: return specs.Array( shape=(4,), dtype=np.float32, name='observation_agent_goal_pos') elif self._observation_type is ObservationType.STATE_INDEX: return specs.DiscreteArray( self._number_of_states, dtype=int, name='observation_state_index') def action_spec(self): return specs.DiscreteArray(4, dtype=int, name='action') def get_obs(self): if self._observation_type is ObservationType.AGENT_ONEHOT: obs = np.zeros(self._layout.shape, dtype=np.float32) # Place agent obs[self._state] = 1 return obs elif self._observation_type is ObservationType.GRID: obs = np.zeros(self._layout.shape + (3,), dtype=np.float32) obs[..., 0] = self._layout < 0 obs[self._state[0], self._state[1], 1] = 1 obs[self._goal_state[0], self._goal_state[1], 2] = 1 return obs elif self._observation_type is ObservationType.AGENT_GOAL_POS: return np.array(self._state + self._goal_state, dtype=np.float32) elif self._observation_type is ObservationType.STATE_INDEX: y, x = self._state return y * self._layout.shape[1] + x def reset(self): self._state = self._start_state self._num_episode_steps = 0 if self._randomize_goals: self.goal_state = self._sample_goal() return dm_env.TimeStep( 
step_type=dm_env.StepType.FIRST, reward=None, discount=None, observation=self.get_obs()) def step(self, action): y, x = self._state if action == 0: # up new_state = (y - 1, x) elif action == 1: # right new_state = (y, x + 1) elif action == 2: # down new_state = (y + 1, x) elif action == 3: # left new_state = (y, x - 1) else: raise ValueError( 'Invalid action: {} is not 0, 1, 2, or 3.'.format(action)) new_y, new_x = new_state step_type = dm_env.StepType.MID if self._layout[new_y, new_x] == -1: # wall reward = self._penalty_for_walls discount = self._discount new_state = (y, x) elif self._layout[new_y, new_x] == 0: # empty cell reward = 0. discount = self._discount else: # a goal reward = self._layout[new_y, new_x] discount = 0. new_state = self._start_state step_type = dm_env.StepType.LAST self._state = new_state self._num_episode_steps += 1 if (self._max_episode_length is not None and self._num_episode_steps >= self._max_episode_length): step_type = dm_env.StepType.LAST return dm_env.TimeStep( step_type=step_type, reward=np.float32(reward), discount=discount, observation=self.get_obs()) def plot_grid(self, add_start=True): plt.figure(figsize=(4, 4)) plt.imshow(self._layout <= -1, interpolation='nearest') ax = plt.gca() ax.grid(0) plt.xticks([]) plt.yticks([]) # Add start/goal if add_start: plt.text( self._start_state[1], self._start_state[0], r'$\mathbf{S}$', fontsize=16, ha='center', va='center') plt.text( self._goal_state[1], self._goal_state[0], r'$\mathbf{G}$', fontsize=16, ha='center', va='center') h, w = self._layout.shape for y in range(h - 1): plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-w', lw=2) for x in range(w - 1): plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-w', lw=2) def plot_state(self, return_rgb=False): self.plot_grid(add_start=False) # Add the agent location plt.text( self._state[1], self._state[0], u'😃', # fontname='symbola', fontsize=18, ha='center', va='center', ) if return_rgb: fig = plt.gcf() plt.axis('tight') plt.subplots_adjust(0, 0, 1, 1, 0, 0) fig.canvas.draw() data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') w, h = fig.canvas.get_width_height() data = data.reshape((h, w, 3)) plt.close(fig) return data def plot_policy(self, policy): action_names = [ r'$\uparrow$', r'$\rightarrow$', r'$\downarrow$', r'$\leftarrow$' ] self.plot_grid() plt.title('Policy Visualization') h, w = self._layout.shape for y in range(h): for x in range(w): # if ((y, x) != self._start_state) and ((y, x) != self._goal_state): if (y, x) != self._goal_state: action_name = action_names[policy[y, x]] plt.text(x, y, action_name, ha='center', va='center') def plot_greedy_policy(self, q): greedy_actions = np.argmax(q, axis=2) self.plot_policy(greedy_actions) def build_gridworld_task(task, discount=0.9, penalty_for_walls=-5, observation_type=ObservationType.STATE_INDEX, max_episode_length=200): """Construct a particular Gridworld layout with start/goal states. Args: task: string name of the task to use. One of {'simple', 'obstacle', 'random_goal'}. discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. 
First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x). max_episode_length: If set, will terminate an episode after this many steps. """ tasks_specifications = { 'simple': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (7, 2) }, 'obstacle': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1], [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (2, 8) }, 'random_goal': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), # 'randomize_goals': True }, } return GridWorld( discount=discount, penalty_for_walls=penalty_for_walls, observation_type=observation_type, max_episode_length=max_episode_length, **tasks_specifications[task]) def setup_environment(environment): """Returns the environment and its spec.""" # Make sure the environment outputs single-precision floats. environment = wrappers.SinglePrecisionWrapper(environment) # Grab the spec of the environment. environment_spec = specs.make_environment_spec(environment) return environment, environment_spec ###Output _____no_output_____ ###Markdown We will use two distinct tabular GridWorlds:* `simple` where the goal is at the bottom left of the grid, little navigation required.* `obstacle` where the goal is behind an obstacle the agent must avoid.You can visualize the grid worlds by running the cell below. Note that **S** indicates the start state and **G** indicates the goal. ###Code # Visualise GridWorlds # Instantiate two tabular environments, a simple task, and one that involves # the avoidance of an obstacle. simple_grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID) obstacle_grid = build_gridworld_task( task='obstacle', observation_type=ObservationType.GRID) # Plot them. simple_grid.plot_grid() plt.title('Simple') obstacle_grid.plot_grid() plt.title('Obstacle') ###Output _____no_output_____ ###Markdown In this environment, the agent has four possible **actions**: `up`, `right`, `down`, and `left`. The **reward** is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal, and otherwise continues. The **discount** on continuing steps, is $\gamma = 0.9$. Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g. **observations**) or consumes (e.g. **actions**). 
The `environment_spec` will show you the form of the **observations**, **rewards** and **discounts** that the environment exposes and the form of the **actions** that can be taken. ###Code # @title Look at environment_spec { form-width: "30%" } # Note: setup_environment is implemented in the same cell as GridWorld. environment, environment_spec = setup_environment(simple_grid) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ###Output _____no_output_____ ###Markdown We first set the environment to its initial state by calling the `reset()` method which returns the first observation and resets the agent to the starting location. ###Code environment.reset() environment.plot_state() ###Output _____no_output_____ ###Markdown Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.Let's take an action and visualise the resulting state of the grid-world. (You'll need to rerun the cell if you pick a new action.) **Note for kaggle users:** As kaggle does not render the forms automatically the students should be careful to notice the various instructions and manually play around with the values for the variables ###Code # @title Pick an action and see the state changing action = "left" #@param ["up", "right", "down", "left"] {type:"string"} action_int = {'up': 0, 'right': 1, 'down': 2, 'left':3 } action = int(action_int[action]) timestep = environment.step(action) # pytype: dm_env.TimeStep environment.plot_state() # @title Run loop { form-width: "30%" } # @markdown This function runs an agent in the environment for a number of # @markdown episodes, allowing it to learn. # @markdown *Double-click* to inspect the `run_loop` function. def run_loop(environment, agent, num_episodes=None, num_steps=None, logger_time_delta=1., label='training_loop', log_loss=False, ): """Perform the run loop. We are following the Acme run loop. Run the environment loop for `num_episodes` episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely. Args: environment: dm_env used to generate trajectories. agent: acme.Actor for selecting actions in the run loop. num_steps: number of steps to run the loop for. If `None` (default), runs without limit. num_episodes: number of episodes to run the loop for. If `None` (default), runs without limit. logger_time_delta: time interval (in seconds) between consecutive logging steps. label: optional label used at logging steps. """ logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta) iterator = range(num_episodes) if num_episodes else itertools.count() all_returns = [] num_total_steps = 0 for episode in iterator: # Reset any counts and start the environment. start_time = time.time() episode_steps = 0 episode_return = 0 episode_loss = 0 timestep = environment.reset() # Make the first observation. agent.observe_first(timestep) # Run an episode. 
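# Alternate between the agent choosing actions and the environment transitioning,
# letting the agent observe every transition and update itself, until the episode ends.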
while not timestep.last(): # Generate an action from the agent's policy and step the environment. action = agent.select_action(timestep.observation) timestep = environment.step(action) # Have the agent observe the timestep and let the agent update itself. agent.observe(action, next_timestep=timestep) agent.update() # Book-keeping. episode_steps += 1 num_total_steps += 1 episode_return += timestep.reward if log_loss: episode_loss += agent.last_loss if num_steps is not None and num_total_steps >= num_steps: break # Collect the results and combine with counts. steps_per_second = episode_steps / (time.time() - start_time) result = { 'episode': episode, 'episode_length': episode_steps, 'episode_return': episode_return, } if log_loss: result['loss_avg'] = episode_loss/episode_steps all_returns.append(episode_return) # Log the given results. logger.write(result) if num_steps is not None and num_total_steps >= num_steps: break return all_returns # @title Implement the evaluation loop { form-width: "30%" } # @markdown This function runs the agent in the environment for a number of # @markdown episodes, without allowing it to learn, in order to evaluate it. # @markdown *Double-click* to inspect the `evaluate` function. def evaluate(environment: dm_env.Environment, agent: acme.Actor, evaluation_episodes: int): frames = [] for episode in range(evaluation_episodes): timestep = environment.reset() episode_return = 0 steps = 0 while not timestep.last(): frames.append(environment.plot_state(return_rgb=True)) action = agent.select_action(timestep.observation) timestep = environment.step(action) steps += 1 episode_return += timestep.reward print( f'Episode {episode} ended with reward {episode_return} in {steps} steps' ) return frames def display_video(frames: Sequence[np.ndarray], filename: str = 'temp.mp4', frame_rate: int = 12): """Save and display video.""" # Write the frames to a video. with imageio.get_writer(filename, fps=frame_rate) as video: for frame in frames: video.append_data(frame) # Read video and display the video. video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) ###Output _____no_output_____ ###Markdown Section 2.2: The AgentWe will be implementing Tabular & Function Approximation agents. Tabular agents are purely in Python.All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs: Agent interfaceEach agent implements the following functions:```pythonclass Agent(acme.Actor): def __init__(self, number_of_actions, number_of_states, ...): """Provides the agent the number of actions and number of states.""" def select_action(self, observation): """Generates actions from observations.""" def observe_first(self, timestep): """Records the initial timestep in a trajectory.""" def observe(self, action, next_timestep): """Records the transition which occurred from taking an action.""" def update(self): """Updates the agent's internals to potentially change its behavior."""```Remarks on the `observe()` function:1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.2. The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.3. 
The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`. Coding Exercise 2.1: Random AgentBelow is a partially complete implemention of an agent that follows a random (non-learning) policy. Fill in the ```select_action``` method.The ```select_action``` method should return a random **integer** between 0 and ```self._num_actions``` (not a tensor or an array!) ###Code class RandomAgent(acme.Actor): def __init__(self, environment_spec): """Gets the number of available actions from the environment spec.""" self._num_actions = environment_spec.actions.num_values def select_action(self, observation): """Selects an action uniformly at random.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the select action method") ################################################# # TODO return a random integer beween 0 and self._num_actions. # HINT: see the reference for how to sample a random integer in numpy: # https://numpy.org/doc/1.16/reference/routines.random.html action = ... return action def observe_first(self, timestep): """Does not record as the RandomAgent has no use for data.""" pass def observe(self, action, next_timestep): """Does not record as the RandomAgent has no use for data.""" pass def update(self): """Does not update as the RandomAgent does not learn from data.""" pass ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7eaa84d6.py) ###Code # @title Visualisation of a random agent in GridWorld { form-width: "30%" } # Create the agent by giving it the action space specification. agent = RandomAgent(environment_spec) # Run the agent in the evaluation loop, which returns the frames. frames = evaluate(environment, agent, evaluation_episodes=1) # Visualize the random agent's episode. display_video(frames) ###Output _____no_output_____ ###Markdown --- Section 3: The Bellman Equation ###Code # @title Video 3: The Bellman Equation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Lv411E7CB", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"cLCoNBmYUns", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown In this tutorial we focus mainly on **value based methods**, where agents maintain a value for all state-action pairs and use those estimates to choose actions that maximize that **value** (instead of maintaining a policy directly, like in **policy gradient methods**). 
We represent the **action-value function** (otherwise known as $\color{green}Q$-function associated with following/employing a policy $\pi$ in a given MDP as:\begin{equation}\color{green}Q^{\color{blue}{\pi}}(\color{red}{s},\color{blue}{a}) = \mathbb{E}_{\tau \sim P^{\color{blue}{\pi}}} \left[ \sum_t \gamma^t \color{green}{r_t}| s_0=\color{red}s,a_0=\color{blue}{a} \right]\end{equation}where $\tau = \{\color{red}{s_0}, \color{blue}{a_0}, \color{green}{r_0}, \color{red}{s_1}, \color{blue}{a_1}, \color{green}{r_1}, \cdots \}$Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:\begin{equation}\color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a}) = \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\left( \color{green}{R}(\color{red}{s},\color{blue}{a}, \color{red}{s'}) + \gamma \color{green}V^\color{blue}{\pi}(\color{red}{s'}) \right)\end{equation}where $\color{green}V^\color{blue}{\pi}$ is the expected $\color{green}Q^\color{blue}{\pi}$ value for a particular state, i.e. $\color{green}V^\color{blue}{\pi}(\color{red}{s}) = \sum_{\color{blue}{a} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi}(\color{blue}{a} |\color{red}{s}) \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a})$. --- Section 4: Policy Evaluation ###Code # @title Video 4: Policy Evaluation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV15f4y157zA", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HAxR4SuaZs4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **Episodic vs non-episodic environments:** Up until now, we've mainly been talking about episodic environments, or environments that terminate and reset (resampled) after a finite number of steps. However, there are also *non-episodic* environments, in which an agent cannot count on the environment resetting. 
Thus, they are forced to learn in a *continual* fashion.**Policy iteration vs value iteration:** Compare the two equations below, noting that the only difference is that in value iteration, the second sum is replaced by a max.*Policy iteration (using Bellman expectation equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\sum_{\color{blue}{a'} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi_{k-1}}(\color{blue}{a'} |\color{red}{s'}) \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation}*Value iteration (using Bellman optimality equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\max_{\color{blue}{a'}} \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation} Coding Exercise 4.1 Policy Evaluation Agent Tabular agents implement a function `q_values()` returning a matrix of Q valuesof shape: (`number_of_states`, `number_of_actions`)In this section, we will implement a `PolicyEvalAgent` as an ACME actor: given an `evaluation_policy` $\pi_e$ and a `behaviour_policy` $\pi_b$, it will use the `behaviour_policy` to choose actions, and it will use the corresponding trajectory data to evaluate the `evaluation_policy` (i.e. compute the Q-values as if you were following the `evaluation_policy`). Algorithm:**Initialize** $Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}(\color{red}s)$**Loop forever**:1. $\color{red}{s} \gets{}$current (nonterminal) state 2. $\color{blue}{a} \gets{} \text{behaviour_policy }\pi_b(\color{red}s)$ 3. Take action $\color{blue}{a}$; observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$4. Compute TD-error: $\delta = \color{green}R + \gamma Q(\color{red}{s'}, \underbrace{\pi_e(\color{red}{s'}}_{\color{blue}{a'}})) − Q(\color{red}s, \color{blue}a)$4. Update Q-value with a small $\alpha$ step: $Q(\color{red}s, \color{blue}a) \gets Q(\color{red}s, \color{blue}a) + \alpha \delta$We will use a uniform `random policy` as our `evaluation policy` here, but you could replace this with any policy you want, such as a greedy one. ###Code # Uniform random policy def random_policy(q): return np.random.randint(4) class PolicyEvalAgent(acme.Actor): def __init__(self, environment_spec, evaluated_policy, behaviour_policy=random_policy, step_size=0.1): self._state = None # Get number of states and actions from the environment spec. self._number_of_states = environment_spec.observations.num_values self._number_of_actions = environment_spec.actions.num_values self._step_size = step_size self._behaviour_policy = behaviour_policy self._evaluated_policy = evaluated_policy ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Initialize your Q-values!") ################################################# # TODO Initialize the Q-values to be all zeros. 
# (Note: can also be random, but we use zeros here for reproducibility) # HINT: This is a table of state and action pairs, so needs to be a 2-D # array. See the reference for how to create this in numpy: # https://numpy.org/doc/stable/reference/generated/numpy.zeros.html self._q = ... self._action = None self._next_state = None @property def q_values(self): # return the Q values return self._q def select_action(self, observation): # Select an action return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): self._state = timestep.observation def observe(self, action, next_timestep): s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute TD-Error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Need to select the next action") ################################################# # TODO Select the next action from the evaluation policy # HINT: Refer to step 4 of the algorithm above. next_a = ... self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a] def update(self): # Updates s = self._state a = self._action # Q-value table update. self._q[s, a] += self._step_size * self._td_error # Update the state self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7b3f830c.py) ###Code # @title Perform policy evaluation { form-width: "30%" } # @markdown Here you can visualize the state value and action-value functions for the "simple" task. num_steps = 1e3 # Create the environment grid = build_gridworld_task(task='simple') environment, environment_spec = setup_environment(grid) # Create the policy evaluation agent to evaluate a random policy. agent = PolicyEvalAgent(environment_spec, evaluated_policy=random_policy) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=int(num_steps)) # get the q-values q = agent.q_values.reshape(grid._layout.shape + (4,)) # visualize value functions print('AFTER {} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=1.) 
###Output _____no_output_____ ###Markdown --- Section 5: Tabular Value-Based Model-Free Learning ###Code # @title Video 5: Model-Free Learning from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1iU4y1n7M6", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"Y4TweUYnexU", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **On-policy (SARSA) vs off-policy (Q-learning) TD control:** Compare the two equations below and see that the only difference is that for Q-learning, the update is performed assuming that a greedy policy is followed, which is not the one used to collect the data, hence the name *off-policy*. *SARSA*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation}*Q-learning*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\max_{\color{blue}{a'}} \color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation} Section 5.1: On-policy control: SARSA AgentIn this section, we are focusing on control RL algorithms, which perform the **evaluation** and **improvement** of the policy synchronously. That is, the policy that is being evaluated improves as the agent is using it to interact with the environent.The first algorithm we are going to be looking at is SARSA. This is an **on-policy algorithm** -- i.e: the data collection is done by leveraging the policy we're trying to optimize. As discussed during lectures, a greedy policy with respect to a given $\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\epsilon$-greedy policy with respect to $\color{Green}Q$. SARSA Algorithm**Input:**- $\epsilon \in (0, 1)$ the probability of taking a random action , and- $\alpha > 0$ the step size, also known as learning rate.**Initialize:** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}$**Loop forever:**1. Get $\color{red}s \gets{}$current (non-terminal) state 2. Select $\color{blue}a \gets{} \text{epsilon_greedy}(\color{green}Q(\color{red}s, \cdot))$ 3. Step in the environment by passing the selected action $\color{blue}a$4. Observe resulting reward $\color{green}r$, discount $\gamma$, and state $\color{red}{s'}$5. Compute TD error: $\Delta \color{green}Q \gets \color{green}r + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}s, \color{blue}a)$, where $\color{blue}{a'} \gets \text{epsilon_greedy}(\color{green}Q(\color{red}{s'}, \cdot))$5. 
Update $\color{green}Q(\color{red}s, \color{blue}a) \gets \color{green}Q(\color{red}s, \color{blue}a) + \alpha \Delta \color{green}Q$ Coding Exercise 5.1: Implement $\epsilon$-greedyBelow you will find incomplete code for sampling from an $\epsilon$-greedy policy, to be used later when we implement an agent that learns values according to the SARSA algorithm. ###Code def epsilon_greedy( q_values_at_s: np.ndarray, # Q-values in state s: Q(s, a). epsilon: float = 0.1 # Probability of taking a random action. ): """Return an epsilon-greedy action sample.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete epsilon greedy policy function") ################################################# # TODO generate a uniform random number and compare it to epsilon to decide if # the action should be greedy or not # HINT: Use np.random.random() to generate a random float from 0 to 1. if ...: #TODO Greedy: Pick action with the largest Q-value. action = ... else: # Get the number of actions from the size of the given vector of Q-values. num_actions = np.array(q_values_at_s).shape[-1] # TODO else return a random action # HINT: Use np.random.randint() to generate a random integer. action = ... return action ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_524ce08f.py) ###Code # @title Sample action from $\epsilon$-greedy { form-width: "30%" } # @markdown With $\epsilon=0.5$, you should see that about half the time, you will get back the optimal # @markdown action 3, but half the time, it will be random. # Create fake q-values q_values = np.array([0, 0, 0, 1]) # Set epsilon = 0.5 epsilon = 0.5 action = epsilon_greedy(q_values, epsilon=epsilon) print(action) ###Output _____no_output_____ ###Markdown Coding Exercise 5.2: Run your SARSA agent on the `obstacle` environmentThis environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods. Try varying the number of steps. ###Code class SarsaAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, epsilon: float, step_size: float = 0.1 ): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size self._epsilon = epsilon # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return epsilon_greedy(self._q[observation], self._epsilon) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the action that would be taken from the next state. next_a = self.select_action(next_s) # Compute the on-policy Q-value update. 
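    # (On-policy detail: the bootstrap value below uses Q[next_s, next_a], where next_a
    # was drawn from the same epsilon-greedy policy that generates the behaviour;
    # this is exactly what distinguishes SARSA from the off-policy Q-learning update.)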
self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the on-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: see step 5 in the pseudocode above. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[s, a] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_4f341a18.py) ###Code # @title Run SARSA agent and visualize value function num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # Create the environment. grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # Create the agent. agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1) # Run the experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) print('AFTER {0:,} STEPS ...'.format(num_steps)) # Get the Q-values and reshape them to recover grid-like structure of states. q_values = agent.q_values grid_shape = grid.layout.shape q_values = q_values.reshape([*grid_shape, -1]) # Visualize the value and Q-value tables. plot_action_values(q_values, epsilon=1.) # Visualize the greedy policy. environment.plot_greedy_policy(q_values) ###Output _____no_output_____ ###Markdown Section 5.2 Off-policy control: Q-learning AgentReminder: $\color{green}Q$-learning is a very powerful and general algorithm, that enables control (figuring out the optimal policy/value function) both on and off-policy.**Initialize** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s} \in \color{red}{\mathcal{S}}$ and $\color{blue}{a} \in \color{blue}{\mathcal{A}}$**Loop forever**:1. Get $\color{red}{s} \gets{}$current (non-terminal) state 2. Select $\color{blue}{a} \gets{} \text{behaviour_policy}(\color{red}{s})$ 3. Step in the environment by passing the selected action $\color{blue}{a}$4. Observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$5. Compute the TD error: $\Delta \color{green}Q \gets \color{green}{r} + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}{s}, \color{blue}{a})$, where $\color{blue}{a'} \gets \arg\max_{\color{blue}{\mathcal A}} \color{green}Q(\color{red}{s'}, \cdot)$6. Update $\color{green}Q(\color{red}{s}, \color{blue}{a}) \gets \color{green}Q(\color{red}{s}, \color{blue}{a}) + \alpha \Delta \color{green}Q$Notice that the actions $\color{blue}{a}$ and $\color{blue}{a'}$ are not selected using the same policy, hence this algorithm being **off-policy**. 
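Before writing the agent, it can help to see the update rule in isolation. The short sketch below is not part of the original tutorial: it applies a single tabular Q-learning backup to a tiny, made-up Q-table, with the transition, step size, and discount invented purely for illustration.

###Code
# Illustration only: one tabular Q-learning backup on fabricated numbers.
import numpy as np

q = np.zeros((3, 2))            # toy table: 3 states, 2 actions
alpha, gamma = 0.1, 0.9         # step size and discount (arbitrary values)
s, a, r, next_s = 0, 1, 1.0, 2  # a made-up transition (s, a, r, s')

# Off-policy target: bootstrap from the greedy value at next_s,
# regardless of the action the behaviour policy will actually take there.
td_error = r + gamma * np.max(q[next_s]) - q[s, a]
q[s, a] += alpha * td_error
print(q)

###Output
_____no_output_____
###Markdown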
Coding Exercise 5.3: Implement Q-Learning ###Code QValues = np.ndarray Action = int # A value-based policy takes the Q-values at a state and returns an action. ValueBasedPolicy = Callable[[QValues], Action] class QLearningAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, behaviour_policy: ValueBasedPolicy, step_size: float = 0.1): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size # Store behavior policy. self._behaviour_policy = behaviour_policy # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the TD error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the off-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: This is very similar to what we did for SARSA, except keep in mind # that we're now taking a max over the q-values (see lecture footnotes above). # You will find the function np.max() useful. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[...] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_0f0ff9d8.py) Run your Q-learning agent on the `obstacle` environment ###Code # @title Run your Q-learning epsilon = 1. 
# @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=0) # visualise the greedy policy grid.plot_greedy_policy(q) ###Output _____no_output_____ ###Markdown Experiment with different levels of greediness* The default was $\epsilon=1.$, what does this correspond to?* Try also $\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way? ###Code # @title Run the cell epsilon = 0.1 # @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=epsilon) # visualise the greedy policy grid.plot_greedy_policy(q) ###Output _____no_output_____ ###Markdown --- Section 6: Function Approximation ###Code # @title Video 6: Function approximation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1sg411M7cn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"7_MYePyYhrM", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown So far we only considered look-up tables for value-functions. In all previous cases every state and action pair $(\color{red}{s}, \color{blue}{a})$, had an entry in our $\color{green}Q$-table. Again, this is possible in this environment as the number of states is equal to the number of cells in the grid. 
But this is not scalable to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table would have to be in this situation).
An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.
But what we **really** want is just to be able to *compute* the Q-value, when fed with a particular $(\color{red}{s}, \color{blue}{a})$ pair. So if we had a way to get a function to do this work instead of keeping a big table, we'd get around this problem.
To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** them to output the values they should. In this section, we will explore $\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf).

Section 6.1 Replay Buffers

An important property of off-policy methods like $\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.
In order to optimize the $\color{green}Q$-function we can then sample data from the replay **dataset** and use that data to perform an update. An illustration of this learning loop is shown below. In the next cell we will show how to implement a simple replay buffer. This can be as simple as a Python list containing transition data. In more complicated scenarios we might want a more performance-tuned variant, we might have to be more concerned about how large the replay is and what to do when it's full, and we might want to sample from replay in different ways. But a simple Python list can go a surprisingly long way.

###Code
# Simple replay buffer

# Create a convenient container for the SARS tuples required by deep RL agents.
Transitions = collections.namedtuple(
    'Transitions', ['state', 'action', 'reward', 'discount', 'next_state'])

class ReplayBuffer(object):
  """A simple Python replay buffer."""

  def __init__(self, capacity: int = None):
    self.buffer = collections.deque(maxlen=capacity)
    self._prev_state = None

  def add_first(self, initial_timestep: dm_env.TimeStep):
    self._prev_state = initial_timestep.observation

  def add(self, action: int, timestep: dm_env.TimeStep):
    transition = Transitions(
        state=self._prev_state,
        action=action,
        reward=timestep.reward,
        discount=timestep.discount,
        next_state=timestep.observation,
    )
    self.buffer.append(transition)
    self._prev_state = timestep.observation

  def sample(self, batch_size: int) -> Transitions:
    # Sample a random batch of Transitions as a list.
    batch_as_list = random.sample(self.buffer, batch_size)

    # Convert the list of `batch_size` Transitions into a single Transitions
    # object where each field has `batch_size` stacked fields.
    return tree_utils.stack_sequence_fields(batch_as_list)

  def flush(self) -> Transitions:
    entire_buffer = tree_utils.stack_sequence_fields(self.buffer)
    self.buffer.clear()
    return entire_buffer

  def is_ready(self, batch_size: int) -> bool:
    return batch_size <= len(self.buffer)

###Output
_____no_output_____
###Markdown
Section 6.2: NFQ Agent

[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$
In other words, the values $\color{green}Q(\color{red}{s}, \color{blue}{a})$ are approximated by the output of a neural network $\color{green}{Q_w}(\color{red}{s}, \color{blue}{a})$ for each possible action $\color{blue}{a} \in \color{blue}{\mathcal{A}}$.$^2$
When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.
By training our neural network to output values such that the *TD error is minimized*, we will also satisfy the Bellman Optimality Equation, which is a sufficient condition for obtaining an optimal policy.
Thanks to automatic differentiation, we can just write the TD error as a loss, e.g., with an $\ell^2$ loss, but others would work too:\begin{equation}L(\color{green}w) = \mathbb{E}\left[ \left( \color{green}{r} + \gamma \max_\color{blue}{a'} \color{green}{Q_w}(\color{red}{s'}, \color{blue}{a'}) − \color{green}{Q_w}(\color{red}{s}, \color{blue}{a}) \right)^2\right].\end{equation}
Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.
NFQ builds on $\color{green}Q$-learning, but if one were to update the Q-values online directly, the training would be unstable and very slow.
Instead, NFQ uses a replay buffer, similar to what we see implemented above (Section 6.1), to update the Q-value in a batched setting.
When it was introduced, it was also entirely off-policy, using a uniformly random policy to collect data, which was prone to instability when applied to more complex environments (e.g. when the inputs are pixels or the tasks are longer and more complicated).
But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.
Below you will find an incomplete NFQ agent that takes in observations from a gridworld. Instead of receiving a tabular state, it receives an observation in the form of its (x,y) coordinates in the gridworld, and the (x,y) coordinates of the goal.
The goal of this coding exercise is to complete this agent by implementing the loss, using mean squared error.
---
$^1$ If you read the NFQ paper, they use a "control" notation, where there is a "cost to minimize", instead of "rewards to maximize", so don't be surprised if signs/max/min do not correspond.
$^2$ We could feed it $\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $argmax$ over them, it's easiest to just output them all in one pass.

Coding Exercise 6.1: Implement NFQ

###Code
# Create a convenient container for the SARS tuples required by NFQ.
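# (This re-declares the same Transitions container defined alongside the replay buffer above.)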
Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class NeuralFittedQAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, q_network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 3e-4): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): q_next_s = self._q_network(next_s) # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the NFQ Agent") ################################################# # TODO Average the squared TD errors over the entire batch using # self._loss_fn, which is defined above as nn.MSELoss() # HINT: Take a look at the reference for nn.MSELoss here: # https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html # What should you put for the input and the target? loss = ... # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() # Store the loss for logging purposes (see run_loop implementation above). 
self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_f42d1415.py) Train and Evaluate the NFQ Agent ###Code # @title Training the NFQ Agent. { form-width: "30%" } epsilon = 0.4 # @param {type:"number"} max_episode_length = 200 # Create the environment. grid = build_gridworld_task( task='simple', observation_type=ObservationType.AGENT_GOAL_POS, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Define the neural function approximator (aka Q network). q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values)) # Build the trainable Q-learning agent agent = NeuralFittedQAgent( environment_spec, q_network, epsilon=epsilon, replay_capacity=100_000, batch_size=10, learning_rate=1e-3) returns = run_loop( environment=environment, agent=agent, num_episodes=500, logger_time_delta=1., log_loss=True) # @title Evaluating the agent (set $\epsilon=0$). { form-width: "30%" } # Temporarily change epsilon to be more greedy; remember to change it back. agent._epsilon = 0.0 # Record a few episodes. frames = evaluate(environment, agent, evaluation_episodes=5) # Change epsilon back. agent._epsilon = epsilon # Display the video of the episodes. display_video(frames, frame_rate=6) # @title Visualise the learned Q values { form-width: "30%" } # Evaluate the policy for every state, similar to tabular agents above. environment.reset() pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) ###Output _____no_output_____ ###Markdown Compare the Q-values approximated with the neural network with the tabular case in **Section 5.3**. Notice how the neural network is generalizing from the visited states to the unvisited similar states, while in the tabular case we updated the value of each state only when we visited that state. Compare the greedy and behaviour ($\epsilon$-greedy) policies ###Code # @title Compare the greedy policy with the agent's policy { form-width: "30%" } # @markdown Notice that the agent's behavior policy has a lot more randomness, # @markdown due to the high epsilon. However, the greedy policy that's learned # @markdown is optimal. 
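# Plot the greedy policy implied by the learnt Q-values, followed by the
# epsilon-greedy behaviour policy sampled above.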
environment.plot_greedy_policy(q)
plt.figtext(-.08, .95, 'Greedy policy using the learnt Q-values')
plt.title('')

environment.plot_policy(pi)
plt.figtext(-.08, .95, "Policy using the agent's behavior policy")
plt.title('')

###Output
_____no_output_____
###Markdown
--- Section 7: DQN

###Code
#@title Video 7: Deep Q-Networks (DQN)
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id=id
      src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id=f"BV1Mo4y1Q7yD", width=854, height=480, fs=1)
  print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id=f"HEDoNtV1y-w", width=854, height=480, fs=1, rel=0)
  print("Video available at https://youtube.com/watch?v=" + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)

###Output
_____no_output_____
###Markdown
In this section, we will look at an advanced deep RL Agent based on the following publication, [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.
Here the agent will act directly on a pixel representation of the gridworld. You can find an incomplete implementation below.

Coding Exercise 7.1: Run a DQN Agent

###Code
class DQN(acme.Actor):

  def __init__(self,
               environment_spec: specs.EnvironmentSpec,
               network: nn.Module,
               replay_capacity: int = 100_000,
               epsilon: float = 0.1,
               batch_size: int = 1,
               learning_rate: float = 5e-4,
               target_update_frequency: int = 10):
    # Store agent hyperparameters and network.
    self._num_actions = environment_spec.actions.num_values
    self._epsilon = epsilon
    self._batch_size = batch_size
    self._q_network = network

    # create a second q net with the same structure and initial values, which
    # we'll be updating separately from the learned q-network.
    self._target_network = copy.deepcopy(self._q_network)

    # Container for the computed loss (see run_loop implementation above).
    self.last_loss = 0.0

    # Create the replay buffer.
    self._replay_buffer = ReplayBuffer(replay_capacity)
    # Keep an internal tracker of steps
    self._current_step = 0
    # How often to update the target network
    self._target_update_frequency = target_update_frequency
    # Setup optimizer that will train the network to minimize the loss.
    self._optimizer = torch.optim.Adam(self._q_network.parameters(), lr=learning_rate)
    self._loss_fn = nn.MSELoss()

  def select_action(self, observation):
    # Compute Q-values.
    # The network expects a batch dimension, which we squeeze out right after.
    q_values = self._q_network(torch.tensor(observation).unsqueeze(0))  # Adds batch dimension.
    q_values = q_values.squeeze(0)  # Removes batch dimension

    # Select epsilon-greedy action.
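    # With probability (1 - epsilon) exploit: take the action with the largest Q-value;
    # otherwise explore by sampling an action uniformly at random.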
if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): self._current_step += 1 if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Optionally unpack the transitions to lighten notation. # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the DQN Agent") ################################################# #TODO get the value of the next states evaluated by the target network #HINT: use self._target_network, defined above. q_next_s = ... # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) # Average the squared TD errors over the entire batch loss = self._loss_fn(target_q_value, q_s_a) # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() if self._current_step % self._target_update_frequency == 0: self._target_network.load_state_dict(self._q_network.state_dict()) # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_d6d1b1d0.py) ###Code # @title Train and evaluate the DQN agent { form-width: "30%" } epsilon = 0.25 # @param {type: "number"} num_episodes = 1000 # @param {type: "integer"} grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID, max_episode_length=200) environment, environment_spec = setup_environment(grid) # Build the agent's network. 
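# Permute rearranges grid observations from (batch, height, width, channels) into the
# (batch, channels, height, width) layout expected by nn.Conv2d.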
class Permute(nn.Module): def __init__(self, order: list): super(Permute,self).__init__() self.order = order def forward(self, x): return x.permute(self.order) q_network = nn.Sequential(Permute([0, 3, 1, 2]), nn.Conv2d(3, 32, kernel_size=4, stride=2,padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(3, 1), nn.Flatten(), nn.Linear(384, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values) ) agent = DQN( environment_spec=environment_spec, network=q_network, batch_size=10, epsilon=epsilon, target_update_frequency=25) returns = run_loop( environment=environment, agent=agent, num_episodes=num_episodes, num_steps=100000) # @title Visualise the learned Q values # Evaluate the policy for every state, similar to tabular agents above. pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) # @title Compare the greedy policy with the agent's policy environment.plot_greedy_policy(q) plt.figtext(-.08, .95, 'Greedy policy using the learnt Q-values') plt.title('') environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's epsilon-greedy policy") plt.title('') ###Output _____no_output_____ ###Markdown --- Section 8: Beyond Value Based Model-Free Methods ###Code # @title Video 8: Other RL Methods from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV14w411977Y", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"1N4Jm9loJx4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Cartpole taskHere we switch to training on a different kind of task, which has a continuous action space: Cartpole in [Gym](https://gym.openai.com/). As you recall from the video, policy-based methods are particularly well-suited for these kinds of tasks. We will be exploring two of those methods below. ###Code # @title Make a CartPole environment env = gym.make('CartPole-v1') SEED=2021 # Set seeds env.seed(SEED) set_seed(SEED) ###Output _____no_output_____ ###Markdown Section 8.1: Policy gradientNow we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. $\color{blue}\pi(\color{red}s) = \arg\max_{\color{blue}a}\color{green}Q(\color{red}s, \color{blue}a)$, we will directly parameterize the policy and write it as the distribution\begin{equation}\color{blue}a_t \sim \color{blue}\pi_{\theta}(\color{blue}a_t|\color{red}s_t).\end{equation}Here $\theta$ represent the parameters of the policy. 
We will update the policy parameters using gradient ascent to **maximize** expected future reward.
One convenient way to represent the conditional distribution above is as a function that takes a state $\color{red}s$ and returns a distribution over actions $\color{blue}a$.
Defined below is an agent which implements the REINFORCE algorithm. REINFORCE (Williams 1992) is the simplest model-free general reinforcement learning technique.
The **basic idea** is to use probabilistic action choice. If the reward at the end turns out to be high, we make **all** actions in this sequence **more likely** (otherwise, we do the opposite).
This strategy could reinforce "bad" actions as well; however, they will turn out to be part of trajectories with low reward and will likely not get accentuated.
From the lectures, we know that we need to compute
\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}
where $\color{green} G_t$ is the sum over future rewards from time $t$, defined as
\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}
The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. It will then update its policy given the above gradient (and the Adam optimizer).
A policy gradient trains an agent without explicitly mapping the value for every state-action pair in the environment: it takes small steps and updates the policy based on the reward associated with each step. In this section, we will build a small network that trains with policy gradients using PyTorch.
The agent can receive a reward immediately for an action, or it can receive the reward at a later time, such as the end of the episode. The policy function our agent will try to learn is $\pi_\theta(a|s)$, where $\theta$ is the parameter vector, $s$ is a particular state, and $a$ is an action.
A Monte-Carlo Policy Gradient approach will be used, which means the agent will run through an entire episode and then update the policy based on the rewards obtained.

###Code
# @title Set the hyperparameters for Policy Gradient

learning_rate = 0.01 # @param {type:"number"}
gamma = 0.99 # @param {type:"number"}
dropout = 0.6 # @param {type:"number"}

# Only used in Policy Gradient Method
hidden_neurons = 128 # @param {type:"integer"}

num_steps = 300

###Output
_____no_output_____
###Markdown
Coding Exercise 8.1: Creating a simple neural network

Below you will find some incomplete code. Fill in the missing code to construct the specified neural network.
Let us define a simple feed-forward neural network with one hidden layer of 128 neurons and a dropout of 0.6. Let's use Adam as our optimizer and a learning rate of 0.01. Use the hyperparameters already defined rather than using explicit values.
Using dropout will significantly improve the performance of the policy. Do compare your results with and without dropout and experiment with other hyper-parameter values as well.
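As an aside (and not the exercise solution), the sketch below shows the kind of output the finished network should produce: a vector of action probabilities that can be sampled with `torch.distributions.Categorical`. The layer sizes and the input state are made up purely for illustration.

###Code
# Illustration only: a stand-in policy head applied to a fabricated 4-dimensional state.
import torch
import torch.nn as nn
from torch.distributions import Categorical

toy_policy = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),
                           nn.Linear(8, 2), nn.Softmax(dim=-1))
state = torch.randn(4)                 # made-up CartPole-like observation
probs = toy_policy(state)              # action probabilities, summing to 1
action = Categorical(probs).sample()   # stochastic action choice
print(probs, action.item())

###Output
_____no_output_____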
###Code class PolicyGradientNet(nn.Module): def __init__(self): super(PolicyGradientNet, self).__init__() self.state_space = env.observation_space.shape[0] self.action_space = env.action_space.n ################################################# ## TODO for students: Define two linear layers ## from the first expression raise NotImplementedError("Student exercise: Create FF neural network.") ################################################# # HINT: you can construct linear layers using nn.Linear(); what are the # sizes of the inputs and outputs of each of the layers? Also remember # that you need to use hidden_neurons (see hyperparameters section above). # https://pytorch.org/docs/stable/generated/torch.nn.Linear.html self.l1 = ... self.l2 = ... self.gamma = ... # Episode policy and past rewards self.past_policy = Variable(torch.Tensor()) self.reward_episode = [] # Overall reward and past loss self.past_reward = [] self.past_loss = [] def forward(self, x): model = torch.nn.Sequential( self.l1, nn.Dropout(p=dropout), nn.ReLU(), self.l2, nn.Softmax(dim=-1) ) return model(x) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_28fb724f.py) Now let's create an instance of the network we have defined and use Adam as the optimizer using the learning_rate as hyperparameter already defined above. ###Code policy = PolicyGradientNet() pg_optimizer = optim.Adam(policy.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Select ActionThe `select_action()` function chooses an action based on our policy probability distribution using the PyTorch distributions package. Our policy returns a probability for each possible action in our action space (move left or move right) as an array of length two such as [0.7, 0.3]. We then choose an action based on these probabilities, record our history, and return our action. ###Code def select_action(state): #Select an action (0 or 1) by running policy model and choosing based on the probabilities in state state = torch.from_numpy(state).type(torch.FloatTensor) state = policy(Variable(state)) c = Categorical(state) action = c.sample() # Add log probability of chosen action if policy.past_policy.dim() != 0: policy.past_policy = torch.cat([policy.past_policy, c.log_prob(action).reshape(1)]) else: policy.past_policy = (c.log_prob(action).reshape(1)) return action ###Output _____no_output_____ ###Markdown Update policyThis function updates the policy. Reward $G_t$We update our policy by taking a sample of the action value function $Q^{\pi_\theta} (s_t,a_t)$ by playing through episodes of the game. $Q^{\pi_\theta} (s_t,a_t)$ is defined as the expected return by taking action $a$ in state $s$ following policy $\pi$.We know that for every step the simulation continues we receive a reward of 1. We can use this to calculate the policy gradient at each time step, where $r$ is the reward for a particular state-action pair. Rather than using the instantaneous reward, $r$, we instead use a long term reward $ v_{t} $ where $v_t$ is the discounted sum of all future rewards for the length of the episode. $v_{t}$ is then,\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}where $\gamma$ is the discount factor (0.99). 
For example, if an episode lasts 5 steps, the reward for each step will be [4.90, 3.94, 2.97, 1.99, 1].
Next we scale our reward vector by subtracting the mean from each element and scaling to unit variance by dividing by the standard deviation. This practice is common for machine learning applications and is the same operation as Scikit Learn's __[StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)__. It also has the effect of compensating for future uncertainty.

Update Policy

After each episode we apply Monte-Carlo Policy Gradient to improve our policy according to the equation:
\begin{equation}\Delta\theta_t = \alpha\nabla_\theta \, \log \pi_\theta (s_t,a_t)G_t\end{equation}
We will then feed our policy history multiplied by our rewards to our optimizer and update the weights of our neural network using stochastic gradient **ascent**. This should increase the likelihood of actions that got our agent a larger reward.
The following function ```update_policy``` updates the network weights and therefore the policy.

###Code
def update_policy():
    R = 0
    rewards = []

    # Discount future rewards back to the present using gamma
    for r in policy.reward_episode[::-1]:
        R = r + policy.gamma * R
        rewards.insert(0, R)

    # Scale rewards
    rewards = torch.FloatTensor(rewards)
    rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps)

    # Calculate loss
    pg_loss = (torch.sum(torch.mul(policy.past_policy, Variable(rewards)).mul(-1), -1))

    # Update network weights
    # Use zero_grad(), backward() and step() methods of the optimizer instance.
    pg_optimizer.zero_grad()
    pg_loss.backward()

    # Update the weights
    for param in policy.parameters():
        param.grad.data.clamp_(-1, 1)

    pg_optimizer.step()

    # Save and initialize episode past counters
    policy.past_loss.append(pg_loss.item())
    policy.past_reward.append(np.sum(policy.reward_episode))
    policy.past_policy = Variable(torch.Tensor())
    policy.reward_episode= []

###Output
_____no_output_____
###Markdown
Training

This is our main policy training loop. For each step in a training episode, we choose an action, take a step through the environment, and record the resulting new state and reward. We call update_policy() at the end of each episode to feed the episode history to our neural network and improve our policy.

###Code
def policy_gradient_train(episodes):
    running_reward = 10
    for episode in range(episodes):
        state = env.reset()
        done = False

        for time in range(1000):
            action = select_action(state)
            # Step through environment using chosen action
            state, reward, done, _ = env.step(action.item())

            # Save reward
            policy.reward_episode.append(reward)
            if done:
                break

        # Used to determine when the environment is solved.
        running_reward = (running_reward * gamma) + (time * (1 - gamma))

        update_policy()

        if episode % 50 == 0:
            print(f"Episode {episode}\tLast length: {time:5.0f}"
                  f"\tAverage length: {running_reward:.2f}")

        if running_reward > env.spec.reward_threshold:
            print(f"Solved! Running reward is now {running_reward} "
                  f"and the last episode runs to {time} time steps!")
            break

###Output
_____no_output_____
###Markdown
Run the model

###Code
episodes = 500 #@param {type:"integer"}
policy_gradient_train(episodes)

###Output
_____no_output_____
###Markdown
Plot the results

###Code
#@title Plot the training performance for policy gradient
def plot_policy_gradient_training():
    window = int(episodes / 20)

    fig, ((ax1), (ax2)) = plt.subplots(1, 2, sharey=True, figsize=[15, 4]);
    rolling_mean = pd.Series(policy.past_reward).rolling(window).mean()
    std = pd.Series(policy.past_reward).rolling(window).std()
    ax1.plot(rolling_mean)
    ax1.fill_between(range(len(policy.past_reward)), rolling_mean-std, rolling_mean+std,
                     color='orange', alpha=0.2)
    ax1.set_title(f"Episode Length Moving Average ({window}-episode window)")
    ax1.set_xlabel('Episode'); ax1.set_ylabel('Episode Length')

    ax2.plot(policy.past_reward)
    ax2.set_title('Episode Length')
    ax2.set_xlabel('Episode')
    ax2.set_ylabel('Episode Length')

    fig.tight_layout(pad=2)
    plt.show()

plot_policy_gradient_training()

###Output
_____no_output_____
###Markdown
Exercise 8.1: Explore different hyperparameters.

Try running the model again after modifying the hyperparameters, and observe the outputs. Be sure to rerun the function definition cells in order to pick up on the updated values.
What do you see when you
1. increase learning rate
2. decrease learning rate
3. decrease gamma
4. increase number of hidden neurons in the network.

Section 8.2: Actor-critic

Recall the policy gradient
\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}
The policy parameters are updated using a Monte Carlo technique, i.e. using random samples. This introduces high variability in log probabilities and cumulative reward values. This leads to noisy gradients and can cause unstable learning.
One way to reduce variance and increase stability is to subtract a baseline from the cumulative reward:
\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} (G_t - b) \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}
Intuitively, a smaller cumulative-reward term gives smaller gradients and thus smaller and (hopefully) more stable updates.
From the lecture slides, we know that in the Actor Critic Method:
1. The “Critic” estimates the value function. This could be the action-value (the Q value) or state-value (the V value).
2. The “Actor” updates the policy distribution in the direction suggested by the Critic (such as with policy gradients).
Both the Critic and Actor functions are parameterized with neural networks. The "Critic" network parameterizes the Q-value.
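To make the two roles concrete before the full implementation, here is a small sketch (not from the original tutorial) of the advantage-based actor and critic losses on fabricated numbers; it mirrors the loss combination used in the training loop further below.

###Code
# Illustration only: actor and critic losses from made-up returns and value estimates.
import torch

qvals = torch.tensor([1.0, 0.5, 0.2])                  # fabricated discounted returns
values = torch.tensor([0.8, 0.6, 0.1])                 # critic's value estimates
log_probs = torch.log(torch.tensor([0.7, 0.4, 0.9]))   # log pi(a|s) of the actions taken

advantage = qvals - values                      # how much better than expected
actor_loss = (-log_probs * advantage).mean()    # raise probability of better-than-expected actions
critic_loss = 0.5 * advantage.pow(2).mean()     # regress value estimates toward the returns
print(actor_loss.item(), critic_loss.item())

###Output
_____no_output_____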
###Code # @title Set the hyperparameters for Actor Critic SEED=2021 learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # Only used in Actor-Critic Method hidden_size = 256 #@ param {type:"integer"} num_steps = 300 ###Output _____no_output_____ ###Markdown Actor Critic Network ###Code class ActorCriticNet(nn.Module): def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4): super(ActorCriticNet, self).__init__() self.num_actions = num_actions self.critic_linear1 = nn.Linear(num_inputs, hidden_size) self.critic_linear2 = nn.Linear(hidden_size, 1) self.actor_linear1 = nn.Linear(num_inputs, hidden_size) self.actor_linear2 = nn.Linear(hidden_size, num_actions) self.all_rewards = [] self.all_lengths = [] self.average_lengths = [] def forward(self, state): state = Variable(torch.from_numpy(state).float().unsqueeze(0)) value = F.relu(self.critic_linear1(state)) value = self.critic_linear2(value) policy_dist = F.relu(self.actor_linear1(state)) policy_dist = F.softmax(self.actor_linear2(policy_dist), dim=1) return value, policy_dist ###Output _____no_output_____ ###Markdown Training ###Code def actor_critic_train(episodes): all_lengths = [] average_lengths = [] all_rewards = [] entropy_term = 0 for episode in range(episodes): log_probs = [] values = [] rewards = [] state = env.reset() for steps in range(num_steps): value, policy_dist = actor_critic.forward(state) value = value.detach().numpy()[0, 0] dist = policy_dist.detach().numpy() action = np.random.choice(num_outputs, p=np.squeeze(dist)) log_prob = torch.log(policy_dist.squeeze(0)[action]) entropy = -np.sum(np.mean(dist) * np.log(dist)) new_state, reward, done, _ = env.step(action) rewards.append(reward) values.append(value) log_probs.append(log_prob) entropy_term += entropy state = new_state if done or steps == num_steps - 1: qval, _ = actor_critic.forward(new_state) qval = qval.detach().numpy()[0, 0] all_rewards.append(np.sum(rewards)) all_lengths.append(steps) average_lengths.append(np.mean(all_lengths[-10:])) if episode % 50 == 0: print(f"episode: {episode},\treward: {np.sum(rewards)}," f"\ttotal length: {steps}," f"\taverage length: {average_lengths[-1]}") break # compute Q values qvals = np.zeros_like(values) for t in reversed(range(len(rewards))): qval = rewards[t] + gamma * qval qvals[t] = qval #update actor critic values = torch.FloatTensor(values) qvals = torch.FloatTensor(qvals) log_probs = torch.stack(log_probs) advantage = qvals - values actor_loss = (-log_probs * advantage).mean() critic_loss = 0.5 * advantage.pow(2).mean() ac_loss = actor_loss + critic_loss + 0.001 * entropy_term ac_optimizer.zero_grad() ac_loss.backward() ac_optimizer.step() # Store results actor_critic.average_lengths = average_lengths actor_critic.all_rewards = all_rewards actor_critic.all_lengths = all_lengths ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 # @param {type:"integer"} env.reset() num_inputs = env.observation_space.shape[0] num_outputs = env.action_space.n actor_critic = ActorCriticNet(num_inputs, num_outputs, hidden_size) ac_optimizer = optim.Adam(actor_critic.parameters()) actor_critic_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for Actor Critic def plot_actor_critic_training(): window = int(episodes / 20) plt.figure(figsize=(15, 4)) plt.subplot(1, 2, 1) smoothed_rewards = pd.Series(actor_critic.all_rewards).rolling(window).mean() std = 
pd.Series(actor_critic.all_rewards).rolling(window).std() plt.plot(smoothed_rewards, label='Smoothed rewards') plt.fill_between(range(len(smoothed_rewards)), smoothed_rewards-std, smoothed_rewards+std, color='orange', alpha=0.2) plt.xlabel('Episode') plt.ylabel('Reward') plt.subplot(1, 2, 2) plt.plot(actor_critic.all_lengths, label='All lengths') plt.plot(actor_critic.average_lengths, label='Average lengths') plt.xlabel('Episode') plt.ylabel('Episode length') plt.legend() plt.tight_layout() plot_actor_critic_training() ###Output _____no_output_____ ###Markdown Exercise 8.3: Effect of episodes on performanceChange the episodes from 500 to 3000 and observe the performance impact. Exercise 8.4: Effect of learning rate on performanceModify the hyperparameters related to learning_rate and gamma and observe the impact on the performance.Be sure to rerun the function definition cells in order to pick up on the updated values. --- Section 9: RL in the real world ###Code # @title Video 9: Real-world applications and ethics from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Nq4y1X7AF", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"5kBtiW88QVw", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Exercise 9: Group discussionForm a group of 2-3 and have discussions (roughly 3 minutes each) of the following questions:1. **Safety**: what are some safety issues that arise in RL that don’t arise with e.g. supervised learning?2. **Generalization**: What happens if your RL agent is presented with data it hasn’t trained on? (“goes out of distribution”)3. How important do you think **interpretability** is in the ethical and safe deployment of RL agents in the real world? 
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_b0fc905f.py) --- Section 10: How to learn more ###Code # @title Video 10: How to learn more from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1WM4y1T7G5", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"dKaOpgor5Ek", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) ###Output _____no_output_____ ###Markdown Neuromatch Academy: Week 3, Day 2, Tutorial 1 Introduction to Reinforcement Learning__Content creators:__ Feryal Behbahani, Jane Wang, Matthew Sargent, Anoop Kulkarni, Sowmya Parthiban__Content reviewers:__ Lily Cheng, Roberto Guidotti, Arush Tagade__Content editors:__ Spiros Chavlis __Production editors:__ Spiros Chavlis ---Tutorial ObjectivesBy the end of the tutorial, you should be able to:1. Within the RL framework, be able to identify the different components: environment, agent, states, and actions. 2. Understand the Bellman equation and components involved. 3. Implement tabular value-based model-free learning (Q-learning and SARSA).4. Run a DQN agent and experiment with different hyperparameters.5. Have a high-level understanding of other (nonvalue-based) RL methods.6. Discuss real-world applications and ethical issues of RL. 
###Code #@markdown Tutorial slides from IPython.display import HTML HTML('<iframe src="https://docs.google.com/presentation/d/1SspkoRiILE1xGUE0_iRboo-ALqXJVEZCt8IlgWOKgGo/edit?resourcekey=0-gFuj1C_wUqxJ2qPHPTceAQ#slide=id.gdb4fce9ed9_0_289" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>') ###Output _____no_output_____ ###Markdown --- Setup ###Code # Install requirements !pip install einops --quiet !pip install dm-acme --quiet !pip install dm-acme[reverb] --quiet !pip install dm-acme[tf] --quiet !pip install dm-acme[envs] --quiet !pip install dm-env --quiet !sudo apt-get install -y xvfb ffmpeg --quiet !pip install imageio --quiet from IPython.display import clear_output clear_output() # Import modules import gym import enum import copy import time import acme import torch import base64 import dm_env import random import IPython import imageio import warnings import itertools import collections import numpy as np import pandas as pd import sonnet as snt import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import matplotlib.pyplot as plt import tensorflow.compat.v2 as tf from acme import environment_loop from acme import specs from acme import wrappers from acme.utils import tree_utils from acme.utils import loggers from tqdm import tqdm, trange from torch.autograd import Variable from torch.distributions import Categorical from typing import Callable, Optional, Sequence tf.enable_v2_behavior() warnings.filterwarnings('ignore') np.set_printoptions(precision=3, suppress=1) SEED = 2021 %matplotlib inline #@title Figure settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") import warnings warnings.filterwarnings( action="ignore", message="This figure includes Axes", category=UserWarning ) warnings.filterwarnings( action="ignore", message="Calculating RSM", category=UserWarning ) #@title Helper Functions #@markdown Implement helpers for value visualisation { form-width: "30%" } map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a] map_from_action_to_name = lambda a: ("up", "right", "down", "left")[a] def plot_values(values, colormap='pink', vmin=-1, vmax=10): plt.imshow(values, interpolation="nearest", cmap=colormap, vmin=vmin, vmax=vmax) plt.yticks([]) plt.xticks([]) plt.colorbar(ticks=[vmin, vmax]) def plot_state_value(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(4, 4)) vmin = np.min(action_values) vmax = np.max(action_values) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_action_values(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(8, 8)) fig.subplots_adjust(wspace=0.3, hspace=0.3) vmin = np.min(action_values) vmax = np.max(action_values) dif = vmax - vmin for a in [0, 1, 2, 3]: plt.subplot(3, 3, map_from_action_to_subplot(a)) plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif) action_name = map_from_action_to_name(a) plt.title(r"$q(s, \mathrm{" + action_name + r"})$") plt.subplot(3, 3, 5) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def smooth(x, window=10): return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1) def 
plot_stats(stats, window=10): plt.figure(figsize=(16,4)) plt.subplot(121) xline = range(0, len(stats.episode_lengths), window) plt.plot(xline, smooth(stats.episode_lengths, window=window)) plt.ylabel('Episode Length') plt.xlabel('Episode Count') plt.subplot(122) plt.plot(xline, smooth(stats.episode_rewards, window=window)) plt.ylabel('Episode Return') plt.xlabel('Episode Count') #@title Set random seed. #@markdown Executing `set_seed(seed=seed)` you are setting the seed # for DL its critical to set the random seed so that students can have a # baseline to compare their results to expected results. # Read more here: https://pytorch.org/docs/stable/notes/randomness.html # Call `set_seed` function in the exercises to ensure reproducibility. import random def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') #@title Set device (GPU or CPU). Execute `set_device()` def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device DEVICE = set_device() print(f"`DEVICE` selected: {DEVICE}") ###Output _____no_output_____ ###Markdown ---Section 1: Introduction to Reinforcement Learning ###Code #@title Video 1: Introduction to RL # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="BWz3scQN50M", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown ---Section 2: General Formulation of RL Problems and Gridworlds ###Code #@title Video 2: General Formulation and MDPs # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="h6TxAALY5Fc", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown The agent interacts with the environment in a loop corresponding to the following diagram. The environment defines a set of **actions** that an agent can take. The agent takes an action informed by the **observations** it receives, and will get a **reward** from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulation of rewards obtained from the environment. Section 2.1: The Environment For this practical session we will focus on a **simple grid world** environment,which consists of a 9 x 10 grid of either wall or empty cells, depicted in black and white, respectively. 
The smiling agent starts from an initial location and needs to navigate to reach the goal square.Below you will find an implementation of this Gridworld as a ```dm_env.Environment``` ###Code #@title Implement GridWorld { form-width: "30%" } #@markdown double-click to inspect its contents class ObservationType(enum.IntEnum): STATE_INDEX = enum.auto() AGENT_ONEHOT = enum.auto() GRID = enum.auto() AGENT_GOAL_POS = enum.auto() class GridWorld(dm_env.Environment): def __init__(self, layout, start_state, goal_state=None, observation_type=ObservationType.STATE_INDEX, discount=0.9, penalty_for_walls=-5, reward_goal=10, max_episode_length=None, randomize_goals=False): """Build a grid environment. Simple gridworld defined by a map layout, a start and a goal state. Layout should be a NxN grid, containing: * 0: empty * -1: wall * Any other positive value: value indicates reward; episode will terminate Args: layout: NxN array of numbers, indicating the layout of the environment. start_state: Tuple (y, x) of starting location. goal_state: Optional tuple (y, x) of goal location. Will be randomly sampled once if None. observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x) discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). reward_goal: Reward added when finding the goal (should be positive). max_episode_length: If set, will terminate an episode after this many steps. randomize_goals: If true, randomize goal at every episode. """ if observation_type not in ObservationType: raise ValueError('observation_type should be a ObservationType instace.') self._layout = np.array(layout) self._start_state = start_state self._state = self._start_state self._number_of_states = np.prod(np.shape(self._layout)) self._discount = discount self._penalty_for_walls = penalty_for_walls self._reward_goal = reward_goal self._observation_type = observation_type self._layout_dims = self._layout.shape self._max_episode_length = max_episode_length self._num_episode_steps = 0 self._randomize_goals = randomize_goals if goal_state is None: # Randomly sample goal_state if not provided goal_state = self._sample_goal() self.goal_state = goal_state def _sample_goal(self): """Randomly sample reachable non-starting state.""" # Sample a new goal n = 0 max_tries = 1e5 while n < max_tries: goal_state = tuple(np.random.randint(d) for d in self._layout_dims) if goal_state != self._state and self._layout[goal_state] == 0: # Reachable state found! 
return goal_state n += 1 raise ValueError('Failed to sample a goal state.') @property def layout(self): return self._layout @property def number_of_states(self): return self._number_of_states @property def goal_state(self): return self._goal_state @property def start_state(self): return self._start_state @property def state(self): return self._state def set_state(self, x, y): self._state = (y, x) @goal_state.setter def goal_state(self, new_goal): if new_goal == self._state or self._layout[new_goal] < 0: raise ValueError('This is not a valid goal!') # Zero out any other goal self._layout[self._layout > 0] = 0 # Setup new goal location self._layout[new_goal] = self._reward_goal self._goal_state = new_goal def observation_spec(self): if self._observation_type is ObservationType.AGENT_ONEHOT: return specs.Array( shape=self._layout_dims, dtype=np.float32, name='observation_agent_onehot') elif self._observation_type is ObservationType.GRID: return specs.Array( shape=self._layout_dims + (3,), dtype=np.float32, name='observation_grid') elif self._observation_type is ObservationType.AGENT_GOAL_POS: return specs.Array( shape=(4,), dtype=np.float32, name='observation_agent_goal_pos') elif self._observation_type is ObservationType.STATE_INDEX: return specs.DiscreteArray( self._number_of_states, dtype=int, name='observation_state_index') def action_spec(self): return specs.DiscreteArray(4, dtype=int, name='action') def get_obs(self): if self._observation_type is ObservationType.AGENT_ONEHOT: obs = np.zeros(self._layout.shape, dtype=np.float32) # Place agent obs[self._state] = 1 return obs elif self._observation_type is ObservationType.GRID: obs = np.zeros(self._layout.shape + (3,), dtype=np.float32) obs[..., 0] = self._layout < 0 obs[self._state[0], self._state[1], 1] = 1 obs[self._goal_state[0], self._goal_state[1], 2] = 1 return obs elif self._observation_type is ObservationType.AGENT_GOAL_POS: return np.array(self._state + self._goal_state, dtype=np.float32) elif self._observation_type is ObservationType.STATE_INDEX: y, x = self._state return y * self._layout.shape[1] + x def reset(self): self._state = self._start_state self._num_episode_steps = 0 if self._randomize_goals: self.goal_state = self._sample_goal() return dm_env.TimeStep( step_type=dm_env.StepType.FIRST, reward=None, discount=None, observation=self.get_obs()) def step(self, action): y, x = self._state if action == 0: # up new_state = (y - 1, x) elif action == 1: # right new_state = (y, x + 1) elif action == 2: # down new_state = (y + 1, x) elif action == 3: # left new_state = (y, x - 1) else: raise ValueError( 'Invalid action: {} is not 0, 1, 2, or 3.'.format(action)) new_y, new_x = new_state step_type = dm_env.StepType.MID if self._layout[new_y, new_x] == -1: # wall reward = self._penalty_for_walls discount = self._discount new_state = (y, x) elif self._layout[new_y, new_x] == 0: # empty cell reward = 0. discount = self._discount else: # a goal reward = self._layout[new_y, new_x] discount = 0. 
new_state = self._start_state step_type = dm_env.StepType.LAST self._state = new_state self._num_episode_steps += 1 if (self._max_episode_length is not None and self._num_episode_steps >= self._max_episode_length): step_type = dm_env.StepType.LAST return dm_env.TimeStep( step_type=step_type, reward=np.float32(reward), discount=discount, observation=self.get_obs()) def plot_grid(self, add_start=True): plt.figure(figsize=(4, 4)) plt.imshow(self._layout <= -1, interpolation='nearest') ax = plt.gca() ax.grid(0) plt.xticks([]) plt.yticks([]) # Add start/goal if add_start: plt.text( self._start_state[1], self._start_state[0], r'$\mathbf{S}$', fontsize=16, ha='center', va='center') plt.text( self._goal_state[1], self._goal_state[0], r'$\mathbf{G}$', fontsize=16, ha='center', va='center') h, w = self._layout.shape for y in range(h - 1): plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-k', lw=2) for x in range(w - 1): plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-k', lw=2) def plot_state(self, return_rgb=False): self.plot_grid(add_start=False) # Add the agent location plt.text( self._state[1], self._state[0], u'😃', fontname='symbola', fontsize=18, ha='center', va='center', ) if return_rgb: fig = plt.gcf() plt.axis('tight') plt.subplots_adjust(0, 0, 1, 1, 0, 0) fig.canvas.draw() data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') w, h = fig.canvas.get_width_height() data = data.reshape((h, w, 3)) plt.close(fig) return data def plot_policy(self, policy): action_names = [ r'$\uparrow$', r'$\rightarrow$', r'$\downarrow$', r'$\leftarrow$' ] self.plot_grid() plt.title('Policy Visualization') h, w = self._layout.shape for y in range(h): for x in range(w): # if ((y, x) != self._start_state) and ((y, x) != self._goal_state): if (y, x) != self._goal_state: action_name = action_names[policy[y, x]] plt.text(x, y, action_name, ha='center', va='center') def plot_greedy_policy(self, q): greedy_actions = np.argmax(q, axis=2) self.plot_policy(greedy_actions) def build_gridworld_task(task, discount=0.9, penalty_for_walls=-5, observation_type=ObservationType.STATE_INDEX, max_episode_length=200): """Construct a particular Gridworld layout with start/goal states. Args: task: string name of the task to use. One of {'simple', 'obstacle', 'random_goal'}. discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x). max_episode_length: If set, will terminate an episode after this many steps. 
""" tasks_specifications = { 'simple': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (7, 2) }, 'obstacle': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1], [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (2, 8) }, 'random_goal': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), # 'randomize_goals': True }, } return GridWorld( discount=discount, penalty_for_walls=penalty_for_walls, observation_type=observation_type, max_episode_length=max_episode_length, **tasks_specifications[task]) def setup_environment(environment): """Returns the environment and its spec.""" # Make sure the environment outputs single-precision floats. environment = wrappers.SinglePrecisionWrapper(environment) # Grab the spec of the environment. environment_spec = specs.make_environment_spec(environment) return environment, environment_spec ###Output _____no_output_____ ###Markdown We will use two distinct tabular GridWorlds:* `simple` where the goal is at the bottom left of the grid, little navigation required.* `obstacle` where the goal is behind an obstacle the agent must avoid.You can visualize the grid worlds by running the cell below. Note that **S** indicates the start state and **G** indicates the goal. ###Code # Visualise GridWorlds # Instantiate two tabular environments, a simple task, and one that involves # the avoidance of an obstacle. simple_grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID) obstacle_grid = build_gridworld_task( task='obstacle', observation_type=ObservationType.GRID) # Plot them. simple_grid.plot_grid() plt.title('Simple') obstacle_grid.plot_grid() plt.title('Obstacle') ###Output _____no_output_____ ###Markdown In this environment, the agent has four possible **actions**: `up`, `right`, `down`, and `left`. The **reward** is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal, and otherwise continues. The **discount** on continuing steps, is $\gamma = 0.9$. Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g. **observations**) or consumes (e.g. **actions**). The `environment_spec` will show you the form of the **observations**, **rewards** and **discounts** that the environment exposes and the form of the **actions** that can be taken. ###Code # @title Look at environment_spec { form-width: "30%" } # Note: setup_environment is implemented in the same cell as GridWorld. 
environment, environment_spec = setup_environment(simple_grid) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ###Output _____no_output_____ ###Markdown We first set the environment to its initial location by calling the `reset()` method which returns the first observation. ###Code environment.reset() environment.plot_state() ###Output _____no_output_____ ###Markdown Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.Let's take an action and visualise the resulting state of the grid-world. (You'll need to rerun the cell if you pick a new action.) ###Code #@title Pick an action and see the state changing action = "left" #@param ["up", "right", "down", "left"] {type:"string"} action_int = {'up': 0, 'right': 1, 'down': 2, 'left':3 } action = int(action_int[action]) timestep = environment.step(action) # pytype: dm_env.TimeStep environment.plot_state() #@title Run loop { form-width: "30%" } #@markdown Double-click to inspect the `run_loop` function. def run_loop(environment, agent, num_episodes=None, num_steps=None, logger_time_delta=1., label='training_loop', log_loss=False, ): """Perform the run loop. We are following the Acme run loop. Run the environment loop for `num_episodes` episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely. Args: environment: dm_env used to generate trajectories. agent: acme.Actor for selecting actions in the run loop. num_steps: number of episodes to run the loop for. If `None` (default), runs without limit. num_episodes: number of episodes to run the loop for. If `None` (default), runs without limit. logger_time_delta: time interval (in seconds) between consecutive logging steps. label: optional label used at logging steps. """ logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta) iterator = range(num_episodes) if num_episodes else itertools.count() all_returns = [] num_total_steps = 0 for episode in iterator: # Reset any counts and start the environment. start_time = time.time() episode_steps = 0 episode_return = 0 episode_loss = 0 timestep = environment.reset() # Make the first observation. agent.observe_first(timestep) # Run an episode. while not timestep.last(): # Generate an action from the agent's policy and step the environment. action = agent.select_action(timestep.observation) timestep = environment.step(action) # Have the agent observe the timestep and let the agent update itself. agent.observe(action, next_timestep=timestep) agent.update() # Book-keeping. episode_steps += 1 num_total_steps += 1 episode_return += timestep.reward if log_loss: episode_loss += agent.last_loss if num_steps is not None and num_total_steps >= num_steps: break # Collect the results and combine with counts. 
steps_per_second = episode_steps / (time.time() - start_time) result = { 'episode': episode, 'episode_length': episode_steps, 'episode_return': episode_return, } if log_loss: result['loss_avg'] = episode_loss/episode_steps all_returns.append(episode_return) # Log the given results. logger.write(result) if num_steps is not None and num_total_steps >= num_steps: break return all_returns #@title Implement the evaluation loop { form-width: "30%" } #@markdown Double-click to inspect the `evaluate` function. def evaluate(environment: dm_env.Environment, agent: acme.Actor, evaluation_episodes: int): frames = [] for episode in range(evaluation_episodes): timestep = environment.reset() episode_return = 0 steps = 0 while not timestep.last(): frames.append(environment.plot_state(return_rgb=True)) action = agent.select_action(timestep.observation) timestep = environment.step(action) steps += 1 episode_return += timestep.reward print( f'Episode {episode} ended with reward {episode_return} in {steps} steps' ) return frames def display_video(frames: Sequence[np.ndarray], filename: str = 'temp.mp4', frame_rate: int = 12): """Save and display video.""" # Write the frames to a video. with imageio.get_writer(filename, fps=frame_rate) as video: for frame in frames: video.append_data(frame) # Read video and display the video. video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) ###Output _____no_output_____ ###Markdown Section 2.2: The AgentWe will be implementing Tabular & Function Approximation agents. Tabular agents are purely in Python.All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs: Agent interfaceEach agent implements the following functions:```pythonclass Agent(acme.Actor): def __init__(self, number_of_actions, number_of_states, ...): """Provides the agent the number of actions and number of states.""" def select_action(self, observation): """Generates actions from observations.""" def observe_first(self, timestep): """Records the initial timestep in a trajectory.""" def observe(self, action, next_timestep): """Records the transition which occurred from taking an action.""" def update(self): """Updates the agent's internals to potentially change its behavior."""```Remarks on the `observe()` function:1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.2. The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.3. The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`. Coding Exercise 2.1: Random AgentBelow is a partially complete implemention of an agent that follows a random policy. Fill in the ```select_action``` method.The ```select_action``` method should return a random **integer** between 0 and ```self._num_actions``` (not a tensor or an array!) 
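As a hint (one possible approach, not necessarily the reference solution): `np.random.randint(n)` samples uniformly from `{0, ..., n-1}` but returns a NumPy integer, so casting with `int()` gives the plain Python integer the exercise asks for.

```python
import numpy as np

num_actions = 4  # stands in for self._num_actions
action = int(np.random.randint(num_actions))
print(action, type(action))  # e.g. 2 <class 'int'>
```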
###Code class RandomAgent(acme.Actor): def __init__(self, environment_spec): """Gets the number of available actions from the environment spec.""" self._num_actions = environment_spec.actions.num_values def select_action(self, observation): """Selects an action uniformly at random.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the select action method") ################################################# # TODO return a random integer action = ... return action def observe_first(self, timestep): """Does not record as the RandomAgent has no use for data.""" pass def observe(self, action, next_timestep): """Does not record as the RandomAgent has no use for data.""" pass def update(self): """Does not update as the RandomAgent does not learn from data.""" pass ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_3b0318bf.py) ###Code #@title Visualisation { form-width: "30%" } # Create the agent by giving it the action space specification. agent = RandomAgent(environment_spec) # Run the agent in the evaluation loop, which returns the frames. frames = evaluate(environment, agent, evaluation_episodes=1) # Visualize the random agent's episode. display_video(frames) ###Output _____no_output_____ ###Markdown ---Section 3: The Bellman Equation ###Code #@title Video 3: The Bellman Equation # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="cLCoNBmYUns", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown In this tutorial we focus mainly on **value based methods**, where agents maintain a value for all state-action pairs and use those estimates to choose actions that maximize that **value** (instead of maintaining a policy directly, like in **policy gradient methods**). We represent the **action-value function** (otherwise known as $\color{green}Q$-function associated with following/employing a policy $\pi$ in a given MDP as:$$ \color{green}Q^{\color{blue}{\pi}}(\color{red}{s},\color{blue}{a}) = \mathbb{E}_{\tau \sim P^{\color{blue}{\pi}}} \left[ \sum_t \gamma^t \color{green}{r_t}| s_0=\color{red}s,a_0=\color{blue}{a} \right]$$where $\tau = \{\color{red}{s_0}, \color{blue}{a_0}, \color{green}{r_0}, \color{red}{s_1}, \color{blue}{a_1}, \color{green}{r_1}, \cdots \}$Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:$$ \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a}) = \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\left( \color{green}{R}(\color{red}{s},\color{blue}{a}, \color{red}{s'}) + \gamma \color{green}V^\color{blue}{\pi}(\color{red}{s'}) \right)$$where $\color{green}V^\color{blue}{\pi}$ is the expected $\color{green}Q^\color{blue}{\pi}$ value for a particular state, i.e. $\color{green}V^\color{blue}{\pi}(\color{red}{s}) = \sum_{\color{blue}{a} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi}(\color{blue}{a} |\color{red}{s}) \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a})$. 
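To make the notation concrete, here is a tiny numerical sketch of one Bellman expectation backup on a toy two-state, two-action MDP. All numbers are invented for illustration, and the reward is simplified to depend only on $(s, a)$ rather than on $(s, a, s')$.

```python
import numpy as np

gamma = 0.9
# P[s, a, s'] = P(s' | s, a) -- made-up transition probabilities.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],   # R(s, a) -- made-up rewards
              [0.0, 2.0]])
pi = np.array([[0.5, 0.5],  # pi(a | s) -- a uniform policy
               [0.5, 0.5]])

# Start from some current estimate of V^pi and do one backup.
V = np.array([1.0, 3.0])
Q = R + gamma * (P @ V)          # Q(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) V(s')
V_new = (pi * Q).sum(axis=-1)    # V(s) = sum_a pi(a|s) Q(s, a)
print(Q)
print(V_new)
```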
--- Section 4: Policy Evaluation ###Code #@title Video 4: Policy Evaluation # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="HAxR4SuaZs4", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Tabular agents implement a function `q_values()` returning a matrix of Q valuesof shape: (`number_of_states`, `number_of_actions`)In this section, we will implement a `PolicyEvalAgent` as an ACME actor: given an `evaluation_policy` and a `behaviour_policy`, it will use the `behaviour_policy` to choose actions, and it will use the corresponding trajectory data to evaluate the `evaluation_policy` (i.e. compute the Q-values as if you were following the `evaluation_policy`). ###Code # Uniform random policy def random_policy(q): return np.random.randint(4) ###Output _____no_output_____ ###Markdown Coding Exercise 4.1 Policy Evaluation Agent ###Code class PolicyEvalAgent(acme.Actor): def __init__(self, number_of_states, number_of_actions, evaluated_policy, behaviour_policy=random_policy, step_size=0.1): self._state = None self._number_of_states = number_of_states self._number_of_actions = number_of_actions self._step_size = step_size self._behaviour_policy = behaviour_policy self._evaluated_policy = evaluated_policy ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Initialize your Q-values!") ################################################# # (this is a table of state and action pairs) # Note: this can be random, but the code was tested w/ zero-initialization self._q = ... self._action = None self._next_state = None @property def q_values(self): # return the Q values return ... def select_action(self, observation): # Select an action return ... def observe_first(self, timestep): self._state = timestep.observation def observe(self, action, next_timestep): s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute TD-Error. self._td_error = ... def update(self): # Updates s = self._state a = self._action # Q-value table update. self._q[s, a] += ... # Update the state self._state = ... ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_f988b0c4.py) --- Section 5: Tabular Value-Based Model-Free Learning ###Code #@title Video 5: Model-Free Learning # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="Y4TweUYnexU", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Section 5.1: On-policy control: SARSA AgentIn this section, we are focusing on control RL algorithms, which perform the **evaluation** and **improvement** of the policy synchronously. That is, the policy that is being evaluated improves as the agent is using it to interact with the environent.The first algorithm we are going to be looking at is SARSA. This is an **on-policy algorithm** -- i.e: the data collection is done by leveraging the policy we're trying to optimize. 
As discussed during lectures, a greedy policy with respect to a given $\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\epsilon$-greedy policy with respect to $\color{Green}Q$. SARSA Algorithm**Input:**- $\epsilon \in (0, 1)$ the probability of taking a random action , and- $\alpha > 0$ the step size, also known as learning rate.**Initialize:** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}$**Loop forever:**1. Get $\color{red}s \gets{}$current (non-terminal) state 2. Select $\color{blue}a \gets{} \text{epsilon_greedy}(\color{green}Q(\color{red}s, \cdot))$ 3. Step in the environment by passing the selected action $\color{blue}a$4. Observe resulting reward $\color{green}r$, discount $\gamma$, and state $\color{red}{s'}$5. Compute TD error: $\Delta \color{green}Q \gets \color{green}r + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}s, \color{blue}a)$, where $\color{blue}{a'} \gets \text{epsilon_greedy}(\color{green}Q(\color{red}{s'}, \cdot))$5. Update $\color{green}Q(\color{red}s, \color{blue}a) \gets \color{green}Q(\color{red}s, \color{blue}a) + \alpha \Delta \color{green}Q$ Coding Exercise 5.1: Implement SARSABelow you will find incomplete code for sampling from an $\epsilon$-greedy policy, and for implementing an agent that learns values according to the SARSA algorithm. ###Code def epsilon_greedy( q_values_at_s: np.ndarray, # Q-values in state s: Q(s, :). epsilon: float = 0.1 # Probability of taking a random action. ): """Return an epsilon-greedy action sample.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete epsilon greedy policy function") ################################################# # TODO return the action greedy to Q values if ...: # Greedy: Pick action with the largest Q-value. action = ... else: # Get the number of actions from the size of the given vector of Q-values. num_actions = np.array(q_values_at_s).shape[-1] # TODO else return a random action action = ... return action ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_8a39c08a.py) Coding Exercise 5.2: Run your SARSA agent on the `obstacle` environmentThis environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods. Try varying the number of steps. ###Code class SarsaAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, epsilon: float, step_size: float = 0.1 ): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size self._epsilon = epsilon # Containers you may find useful. 
self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return epsilon_greedy(self._q[observation], self._epsilon) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the action that would be taken from the next state. next_a = self.select_action(next_s) # Compute the on-policy Q-value update. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the on-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a] def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). self._q[s, a] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7bde630d.py) ###Code #@title Run SARSA agent num_steps = 1e5 #@param {type:"number"} num_steps = int(num_steps) # Create the environment. grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # Create the agent. agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1) # Run the experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) print('AFTER {0:,} STEPS ...'.format(num_steps)) # Get the Q-values and reshape them to recover grid-like structure of states. q_values = agent.q_values grid_shape = grid.layout.shape q_values = q_values.reshape([*grid_shape, -1]) # Visualize the value and Q-value tables. plot_action_values(q_values, epsilon=1.) # Visualize the greedy policy. environment.plot_greedy_policy(q_values) ###Output _____no_output_____ ###Markdown Section 5.2 Off-policy control: Q-learning AgentReminder: $\color{green}Q$-learning is a very powerful and general algorithm, that enables control (figuring out the optimal policy/value function) both on and off-policy.**Initialize** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s} \in \color{red}{\mathcal{S}}$ and $\color{blue}{a} \in \color{blue}{\mathcal{A}}$**Loop forever**:1. Get $\color{red}{s} \gets{}$current (non-terminal) state 2. Select $\color{blue}{a} \gets{} \text{behaviour_policy}(\color{red}{s})$ 3. Step in the environment by passing the selected action $\color{blue}{a}$4. Observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$5. 
Compute the TD error: $\Delta \color{green}Q \gets \color{green}{r} + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}{s}, \color{blue}{a})$, where $\color{blue}{a'} \gets \arg\max_{\color{blue}{\mathcal A}} \color{green}Q(\color{red}{s'}, \cdot)$6. Update $\color{green}Q(\color{red}{s}, \color{blue}{a}) \gets \color{green}Q(\color{red}{s}, \color{blue}{a}) + \alpha \Delta \color{green}Q$Notice that the actions $\color{blue}{a}$ and $\color{blue}{a'}$ are not selected using the same policy, hence this algorithm being **off-policy**. Coding Exercise 5.3: Implement Q-Learning ###Code QValues = np.ndarray Action = int # A value-based policy takes the Q-values at a state and returns an action. ValueBasedPolicy = Callable[[QValues], Action] class QLearningAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, behaviour_policy: ValueBasedPolicy, step_size: float = 0.1): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size # Store behavior policy. self._behaviour_policy = behaviour_policy # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the TD error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the off-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). self._q[...] += ... # Update the current state. self._state = self._next_state ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_195bbb16.py) Run your Q-learning agent on the `obstacle` environment ###Code #@title Run your Q-learning epsilon = 1. 
#@param {type:"number"} num_steps = 1e5 #@param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=0) # visualise the greedy policy grid.plot_greedy_policy(q) ###Output _____no_output_____ ###Markdown Experiment with different levels of greediness* The default was $\epsilon=1.$, what does this correspond to?* Try also $\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way? ###Code #@title Run the cell epsilon = 0.1 #@param {type:"number"} num_steps = 1e5 #@param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=epsilon) # visualise the greedy policy grid.plot_greedy_policy(q) ###Output _____no_output_____ ###Markdown So far we only considered look-up tables for value-functions. In all previous cases every state and action pair $(\color{red}{s}, \color{blue}{a})$, had an entry in our $\color{green}Q$-table. Again, this is possible in this environment as the number of states is equal to the number of cells in the grid. But this is not scalable to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table could be in this situation?).An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.But what we **really** want is just to be able to *compute* the Q-value, when fed with a particular $(\color{red}{s}, \color{blue}{a})$ pair. So if we had a way to get a function to do this work instead of keeping a big table, we'd get around this problem.To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** them to output the values they should. In this section, we will explore $\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf). 
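As a minimal sketch of this idea (the sizes and feature encoding here are illustrative assumptions, not the tutorial's implementation): a tabular agent indexes a `(num_states, num_actions)` array, whereas a function approximator maps a state feature vector to one Q-value per action, so it can produce an estimate even for states it has never been updated on.

```python
import numpy as np
import torch
import torch.nn as nn

num_states, num_actions = 90, 4

# Tabular: one stored entry per (state, action) pair -- grows with the state space.
q_table = np.zeros((num_states, num_actions))
q_tabular = q_table[17]                                   # lookup for state index 17

# Function approximation: a small network computes Q-values from state features,
# e.g. (agent_y, agent_x, goal_y, goal_x) as used by the NFQ agent later on.
q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, num_actions))
state_features = torch.tensor([2.0, 2.0, 7.0, 2.0])
q_approx = q_network(state_features)                      # shape (num_actions,)
print(q_tabular.shape, q_approx.shape)
```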
--- Section 6: Function Approximation ###Code #@title Video 6: Function approximation # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="7_MYePyYhrM", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Section 6.1 Replay BuffersAn important property of off-policy methods like $\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.In order to optimize the $\color{green}Q$-function we can then sample data from the replay **dataset** and use that data to perform an update. An illustration of this learning loop is shown below. In the next cell we will show how to implement a simple replay buffer. This can be as simple as a python list containing transition data. In more complicated scenarios we might want to have a more performance-tuned variant, we might have to be more concerned about how large replay is and what to do when its full, and we might want to sample from replay in different ways. But a simple python list can go a surprisingly long way. ###Code # Simple replay buffer # Create a convenient container for the SARS tuples required by deep RL agents. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class ReplayBuffer(object): """A simple Python replay buffer.""" def __init__(self, capacity: int = None): self.buffer = collections.deque(maxlen=capacity) self._prev_state = None def add_first(self, initial_timestep: dm_env.TimeStep): self._prev_state = initial_timestep.observation def add(self, action: int, timestep: dm_env.TimeStep): transition = Transitions( state=self._prev_state, action=action, reward=timestep.reward, discount=timestep.discount, next_state=timestep.observation, ) self.buffer.append(transition) self._prev_state = timestep.observation def sample(self, batch_size: int) -> Transitions: # Sample a random batch of Transitions as a list. batch_as_list = random.sample(self.buffer, batch_size) # Convert the list of `batch_size` Transitions into a single Transitions # object where each field has `batch_size` stacked fields. return tree_utils.stack_sequence_fields(batch_as_list) def flush(self) -> Transitions: entire_buffer = tree_utils.stack_sequence_fields(self.buffer) self.buffer.clear() return entire_buffer def is_ready(self, batch_size: int) -> bool: return batch_size <= len(self.buffer) ###Output _____no_output_____ ###Markdown Section 6.2: NFQ Agent[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$In other words, the value $\color{green}Q(\color{red}{s}, \color{blue}{a})$ are approximated by the output of a neural network $\color{green}{Q_w}(\color{red}{s}, \color{blue}{a})$ for each possible action $\color{blue}{a} \in \color{blue}{\mathcal{A}}$.$^2$When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. 
But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.By training our neural network to output values such that the *TD error is minimized*, we will also satisfy the Bellman Optimality Equation, which is a good sufficient condition to enforce, to obtain an optimal policy.Thanks to automatic differentiation, we can just write the TD error as a loss, e.g. with an $\ell^2$ loss, but others would work too:$$L(\color{green}w) = \mathbb{E}\left[ \left( \color{green}{r} + \gamma \max_\color{blue}{a'} \color{green}{Q_w}(\color{red}{s'}, \color{blue}{a'}) − \color{green}{Q_w}(\color{red}{s}, \color{blue}{a}) \right)^2\right].$$Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.NFQ builds on $\color{green}Q$-learning, but if one were to update the Q-values online directly, the training can be unstable and very slow.Instead, NFQ uses a replay buffer, similar to what you just implemented above, to update the Q-value in a batched setting.When it was introduced, it also was entirely off-policy using a uniformly random policy to collect data, which was prone to instability when applied to more complex environments (e.g. when the input are pixels or the tasks are longer and more complicated).But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.Below you will find an incomplete NFQ agent that takes in observations from a gridworld. Instead of receiving a tabular state, it receives an observation in the form of its (x,y) coordinates in the gridworld, and the (x,y) coordinates of the goal.---$^1$ If you read the NFQ paper, they use a "control" notation, where there is a "cost to minimize", instead of "rewards to maximize", so don't be surprised if signs/max/min do not correspond.$^2$ We could feed it $\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $argmax$ over them, it's easiest to just output them all in one pass. Coding Exercise 6.1: Implement NFQ ###Code # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class NeuralFittedQAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, q_network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 3e-4): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. 
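# With probability (1 - epsilon) exploit: take the argmax over the predicted Q-values; otherwise explore with a uniformly random action.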
if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): q_next_s = self._q_network(next_s) # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the NFQ Agent") ################################################# # TODO Average the squared TD errors over the entire batch (axis=0). loss = ... # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_b33e659b.py) Train and Evaluate the NFQ Agent ###Code #@title Training the NFQ Agent. { form-width: "30%" } epsilon = 0.5 # @param {type:"number"} max_episode_length = 200 # Create the environment. grid = build_gridworld_task( task='simple', observation_type=ObservationType.AGENT_GOAL_POS, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Define the neural function approximator (aka Q network). q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values)) # Build the trainable Q-learning agent agent = NeuralFittedQAgent( environment_spec, q_network, epsilon=epsilon, replay_capacity=100_000, batch_size=10, learning_rate=1e-3) returns = run_loop( environment=environment, agent=agent, num_episodes=100, logger_time_delta=1., log_loss=True) #@title Evaluating the agent. { form-width: "30%" } # Temporarily change epsilon to be more greedy; remember to change it back. agent._epsilon = 0.05 # Record a few episodes. frames = evaluate(environment, agent, evaluation_episodes=5) # Change epsilon back. agent._epsilon = epsilon # Display the video of the episodes. 
display_video(frames, frame_rate=6) #@title Visualise the learned Q values { form-width: "30%" } # Evaluate the policy for every state, similar to tabular agents above. environment.reset() pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) ###Output _____no_output_____ ###Markdown Compare the greedy and behaviour ($\epsilon$-greedy) policiesNotice that the behaviour policy randomly flips arrows to random directions. ###Code environment.plot_greedy_policy(q) plt.title('Greedy policy using the learnt Q-values') environment.plot_policy(pi) plt.title("Policy using the agent's behaviour policy"); ###Output _____no_output_____ ###Markdown --- Section 7: DQN ###Code #@title Video 7: Deep Q-Networks (DQN) # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="HEDoNtV1y-w", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown --> In this section, we will look at an advanced deep RL Agent based on the following publication, [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.Here the agent will act directly on a pixel representation of the gridworld. You can find an incomplete implementation below. Coding Exercise 7.1: Run a DQN Agent ###Code class DQN(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 5e-4, target_update_frequency: int = 10): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network self._target_network = copy.deepcopy(self._q_network) # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Keep an internal tracker of steps self._current_step = 0 # How often to update the target network self._target_update_frequency = target_update_frequency # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. # Sonnet requires a batch dimension, which we squeeze out right after. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. 
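# Same epsilon-greedy behaviour policy as the NFQ agent above; what changes in DQN
# is the update() method below, which bootstraps its targets from a separate,
# periodically synchronised target network.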
if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): self._current_step += 1 if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Optionally unpack the transitions to lighten notation. # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the DQN Agent") ################################################# #TODO get the value of the next states evaluated by the target network q_next_s = ... # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. # TODO compute the target value target_q_value = ... # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) # Compute the TD errors. #td_error = target_q_value - q_s_a # Average the squared TD errors over the entire batch (axis=0). loss = self._loss_fn(target_q_value, q_s_a) # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() if self._current_step % self._target_update_frequency == 0: self._target_network.load_state_dict(self._q_network.state_dict()) # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_892324a5.py) ###Code #@title Train and evaluate the DQN agent { form-width: "30%" } epsilon = 0.25 # @param {type: "number"} grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID, max_episode_length=200) environment, environment_spec = setup_environment(grid) # Build the agent's network. 
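# The GRID observation arrives channels-last, i.e. (height, width, 3), while
# PyTorch's Conv2d expects (batch, channels, height, width); the first layer of the
# network therefore permutes the dimensions accordingly.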
class Permute(nn.Module): def __init__(self, order: list): super(Permute,self).__init__() self.order = order def forward(self, x): return x.permute(self.order) q_network = nn.Sequential(Permute([0, 3, 1, 2]), nn.Conv2d(3, 32, kernel_size=4, stride=2,padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(3, 1), nn.Flatten(), nn.Linear(384, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values) ) agent = DQN( environment_spec=environment_spec, network=q_network, batch_size=10, epsilon=epsilon, target_update_frequency=25) returns = run_loop( environment=environment, agent=agent, num_episodes=1000, num_steps=100_000) # @title Visualise the learned Q values { form-width: "30%" } # Evaluate the policy for every state, similar to tabular agents above. pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) #@title Compare the greedy policy with the agent's policy { form-width: "30%" } environment.plot_greedy_policy(q) plt.title('Greedy policy using the learnt Q-values') environment.plot_policy(pi) plt.title("Policy using the agent's epsilon-greedy policy"); ###Output _____no_output_____ ###Markdown --- Section 8: Beyond Value Based Model-Free Methods ###Code #@title Video 8: Other RL Methods # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video # Tune hyperparameters SEED=2021 learning_rate = 0.01 gamma = 0.99 dropout = 0.6 # hyperparameters hidden_neurons = 128 # Only used in Policy Gradient Method hidden_size = 256 # only used in Actor-Critic Method num_steps = 300 max_episodes = 1000 # Use the CartPole example env = gym.make('CartPole-v1') # Set seeds env.seed(SEED) set_seed(SEED) ###Output _____no_output_____ ###Markdown Section 8.1: Policy gradientNow we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. $\color{blue}\pi(\color{red}s) = \arg\max_{\color{blue}a}\color{green}Q(\color{red}s, \color{blue}a)$, we will directly parameterize the policy and write it as the distribution$$\color{blue}a_t \sim \color{blue}\pi_{\theta}(\color{blue}a_t|\color{red}s_t).$$Here $\theta$ represent the parameters of the policy. We will update the policy parameters using gradient ascent to **maximize** expected future reward.One convenient way to represent the conditional distribution above is as a function that takes a state $\color{red}s$ and returns a distribution over actions $\color{blue}a$.Defined below is an agent which implements the REINFORCE algorithm. REINFORCE (Williams 1992) is the simplest model-free general reinforcement learning technique.The **basic idea** is to use probabilistic action choice. 
If the reward at the end turns out to be high, we make **all** actions in this sequence **more likely** (otherwise, we do the opposite).This may sometimes reinforce "bad" actions as well, but they will hence turn out to be part of trajectories with low reward and will likely not get accentuated.From the lectures, we know that we need to compute$$\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]$$where $\color{green} G_t$ is the sum over future rewards from time $t$, defined as$$\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).$$**insert a pictorial slide from the slide deck**The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. It will then update its policy given the above gradient (and the Adam optimizer).A policy gradient trains an agent without explicitly mapping the value for every state-action pair in an environment by taking small steps and updating the policy based on the reward associated with that step. In this section, we will build a small network that trains using policy gradient using PyTorch.The agent can receive a reward immediately for an action or the agent can receive the award at a later time such as the end of the episode. The policy function for our agent will try to learn as $\pi_\theta(a,s)$, where $\theta$ is the parameter vector, $s$ is a particular state, and $a$ is an action.Monte-Carlo Policy Gradient approach will be used, which means the agent will run through an entire episode and then update policy based on the rewards obtained. Coding Exercise 8.1: Creating a simple neural networkBelow you will find some incomplete code. Fill in the missing code to construct the specified neural network.Let us define a simple feed forward neural network with one hidden layer of 128 neurons and a dropout of 0.6. Let's use Adam as our optimizer and a learning rate of 0.01. Use the hyperparameters already defined rather than using explicit values.Using dropout will significantly improve the performance of the policy. Do compare your results with and without dropout and experiment with other hyper-parameter values as well. ###Code class PolicyGradientNet(nn.Module): def __init__(self): super(PolicyGradientNet, self).__init__() self.state_space = env.observation_space.shape[0] self.action_space = env.action_space.n ################################################# ## TODO for students: fill in the missing code ## from the first expression raise NotImplementedError("Student exercise: Create FF neural network.") ################################################# self.l1 = ... self.l2 = ... self.gamma = ... # Episode policy and past rewards self.past_policy = Variable(torch.Tensor()) self.reward_episode = [] # Overall reward and past loss self.past_reward = [] self.past_loss = [] def forward(self, x): model = torch.nn.Sequential( self.l1, nn.Dropout(p=dropout), nn.ReLU(), self.l2, nn.Softmax(dim=-1) ) return model(x) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_74dcaa64.py) Now let's create an instance of the network we have defined and use ADAM as the optimizer using the learning_rate as hyperparameter already defined above. 
###Code policy = PolicyGradientNet() pg_optimizer = optim.Adam(policy.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Select Action The `select_action()` function chooses an action based on our policy probability distribution using the PyTorch distributions package. Our policy returns a probability for each possible action in our action space (move left or move right) as an array of length two, such as [0.7, 0.3]. We then choose an action based on these probabilities, record our history, and return our action. ###Code def select_action(state): # Select an action (0 or 1) by running the policy model and sampling from the resulting probabilities state = torch.from_numpy(state).type(torch.FloatTensor) state = policy(Variable(state)) c = Categorical(state) action = c.sample() # Add log probability of chosen action if policy.past_policy.dim() != 0: policy.past_policy = torch.cat([policy.past_policy, c.log_prob(action).reshape(1)]) else: policy.past_policy = (c.log_prob(action).reshape(1)) return action ###Output _____no_output_____ ###Markdown Update policy This function updates the policy. Reward $G_t$ We update our policy by taking a sample of the action-value function $Q^{\pi_\theta} (s_t,a_t)$ obtained by playing through episodes of the game. $Q^{\pi_\theta} (s_t,a_t)$ is defined as the expected return from taking action $a$ in state $s$ and then following policy $\pi$. We know that for every step the simulation continues, we receive a reward of 1. We can use this to calculate the policy gradient at each time step, where $r$ is the reward for a particular state-action pair. Rather than using the instantaneous reward $r$, we instead use the long-term return $G_t$, the discounted sum of all future rewards over the remainder of the episode:$$\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}),$$where $\gamma$ is the discount factor (0.99). For example, if an episode lasts 5 steps, the discounted returns for the successive steps will be [4.90, 3.94, 2.97, 1.99, 1]. Next we scale our return vector by subtracting the mean from each element and scaling to unit variance by dividing by the standard deviation. This practice is common in machine learning and is the same operation as scikit-learn's __[StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)__. It also has the effect of compensating for future uncertainty. Update Policy After each episode we apply the Monte-Carlo Policy Gradient update to improve our policy according to the equation:$$\Delta\theta_t = \alpha\nabla_\theta \, \log \pi_\theta (s_t,a_t)G_t$$We will then feed our policy history multiplied by the returns to our optimizer and update the weights of our neural network using stochastic gradient **ascent**. This should increase the likelihood of actions that earned our agent a larger reward. Exercise 8.2: Update network weights while updating the overall policy Below you will find some incomplete code. Fill in the missing code to update the network weights using the optimizer.
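Before completing it, it may help to sanity-check the discounting loop described above in isolation. The following standalone sketch (the variable names are illustrative and not part of the tutorial code) reproduces the example returns quoted earlier; the same pattern appears at the top of `update_policy()` below.

```python
# Standalone sanity check of the discounting described above: five steps of
# reward 1 with gamma = 0.99 should give the returns quoted in the text.
gamma = 0.99
rewards = [1.0] * 5
returns, R = [], 0.0
for r in reversed(rewards):
    R = r + gamma * R
    returns.insert(0, R)
print([round(G, 2) for G in returns])  # [4.9, 3.94, 2.97, 1.99, 1.0]
```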
###Code def update_policy(): R = 0 rewards = [] # Discount future rewards back to the present using gamma for r in policy.reward_episode[::-1]: R = r + policy.gamma * R rewards.insert(0, R) # Scale rewards rewards = torch.FloatTensor(rewards) rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps) # Calculate loss pg_loss = (torch.sum(torch.mul(policy.past_policy, Variable(rewards)).mul(-1), -1)) ################################################# ## TODO for students: fill in the missing code ## from the first expression raise NotImplementedError("Student exercise: Update the network weights.") ################################################# # Update network weights # Use zero_grad(), backward() and step() methods of the optimizer instance. pg_optimizer.zero_grad() pg_loss.backward() # Update the weights for param in policy.parameters(): param.grad.data.clamp_(-1, 1) pg_optimizer.step() # Save and intialize episode past counters policy.past_loss.append(pg_loss.item()) policy.past_reward.append(np.sum(policy.reward_episode)) policy.past_policy = Variable(torch.Tensor()) policy.reward_episode= [] ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_3d4bb09a.py) TrainingThis is our main policy training loop. For each step in a training episode, we choose an action, take a step through the environment, and record the resulting new state and reward. We call update_policy() at the end of each episode to feed the episode history to our neural network and improve our policy. ###Code def policy_gradient_train(episodes): running_reward = 10 for episode in range(episodes): state = env.reset() done = False for time in range(1000): action = select_action(state) # Step through environment using chosen action state, reward, done, _ = env.step(action.item()) # Save reward policy.reward_episode.append(reward) if done: break # Used to determine when the environment is solved. running_reward = (running_reward * gamma) + (time * (1 - gamma)) update_policy() if episode % 50 == 0: print(f"Episode {episode}\tLast length: {time:5.0f}" f"\tAverage length: {running_reward:.2f}") if running_reward > env.spec.reward_threshold: print(f"Solved! 
Running reward is now {running_reward} " f"and the last episode runs to {time} time steps!") break ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 1000 policy_gradient_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code #@title Helper function for plotting the training performance def plot_policy_gradient_training(): window = int(episodes / 20) fig, ((ax1), (ax2)) = plt.subplots(1, 2, sharey=True, figsize=[15, 4]); rolling_mean = pd.Series(policy.past_reward).rolling(window).mean() std = pd.Series(policy.past_reward).rolling(window).std() ax1.plot(rolling_mean) ax1.fill_between(range(len(policy.past_reward)), rolling_mean-std, rolling_mean+std, color='orange', alpha=0.2) ax1.set_title(f"Episode Length Moving Average ({window}-episode window)") ax1.set_xlabel('Episode'); ax1.set_ylabel('Episode Length') ax2.plot(policy.past_reward) ax2.set_title('Episode Length') ax2.set_xlabel('Episode') ax2.set_ylabel('Episode Length') fig.tight_layout(pad=2) plt.show() plot_policy_gradient_training() ###Output _____no_output_____ ###Markdown Exercise 8.3: BONUSTry running the model again, by modifying the hyperparameters and observe the outputs.What do you see when you 1. increase learning rate2. decrease learning rate3. decrease gamma4. increase number of neurons in the network. Section 8.2: Actor-criticRecall the policy gradient$$\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]$$The policy parameters are updated using Monte Carlo technique and uses random samples. This introduces high variability in log probabilities and cumulative reward values. This leads to noisy gradients and can cause unstable learning.One way to reduce variance and increase stability is subtracting the cumulative reward by a baseline:$$\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} (G_t - b) \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]$$Intuitively, reducing cumulative reward will make smaller gradients and thus smaller and more stable (hopefully) updates.From the lecture slides, we know that in Actor Critic Method:1. The “Critic” estimates the value function. This could be the action-value (the Q value) or state-value (the V value).2. The “Actor” updates the policy distribution in the direction suggested by the Critic (such as with policy gradients).Both the Critic and Actor functions are parameterized with neural networks. The "Critic" network parameterizes the Q-value. 
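Before reading the full implementation, here is a minimal, self-contained sketch of the two losses the training loop below combines (the tensors hold made-up numbers purely for illustration; the real loop also adds a small entropy bonus):

```python
import torch

# Illustrative per-step quantities collected over one episode.
log_probs = torch.tensor([-0.7, -0.4, -0.9])   # log pi(a_t | s_t) from the actor
values    = torch.tensor([ 1.2,  0.8,  0.5])   # V(s_t) estimates from the critic
qvals     = torch.tensor([ 1.5,  1.0,  0.2])   # discounted returns used as targets

advantage   = qvals - values                   # how much better than the baseline
actor_loss  = (-log_probs * advantage).mean()  # policy gradient with a baseline
critic_loss = 0.5 * advantage.pow(2).mean()    # regress V towards the returns
loss = actor_loss + critic_loss
```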
Actor Critic Network ###Code class ActorCriticNet(nn.Module): def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4): super(ActorCriticNet, self).__init__() self.num_actions = num_actions self.critic_linear1 = nn.Linear(num_inputs, hidden_size) self.critic_linear2 = nn.Linear(hidden_size, 1) self.actor_linear1 = nn.Linear(num_inputs, hidden_size) self.actor_linear2 = nn.Linear(hidden_size, num_actions) self.all_rewards = [] self.all_lengths = [] self.average_lengths = [] def forward(self, state): state = Variable(torch.from_numpy(state).float().unsqueeze(0)) value = F.relu(self.critic_linear1(state)) value = self.critic_linear2(value) policy_dist = F.relu(self.actor_linear1(state)) policy_dist = F.softmax(self.actor_linear2(policy_dist), dim=1) return value, policy_dist ###Output _____no_output_____ ###Markdown Training ###Code def actor_critic_train(episodes): all_lengths = [] average_lengths = [] all_rewards = [] entropy_term = 0 for episode in range(episodes): log_probs = [] values = [] rewards = [] state = env.reset() for steps in range(num_steps): value, policy_dist = actor_critic.forward(state) value = value.detach().numpy()[0, 0] dist = policy_dist.detach().numpy() action = np.random.choice(num_outputs, p=np.squeeze(dist)) log_prob = torch.log(policy_dist.squeeze(0)[action]) entropy = -np.sum(np.mean(dist) * np.log(dist)) new_state, reward, done, _ = env.step(action) rewards.append(reward) values.append(value) log_probs.append(log_prob) entropy_term += entropy state = new_state if done or steps == num_steps - 1: qval, _ = actor_critic.forward(new_state) qval = qval.detach().numpy()[0, 0] all_rewards.append(np.sum(rewards)) all_lengths.append(steps) average_lengths.append(np.mean(all_lengths[-10:])) if episode % 50 == 0: print(f"episode: {episode},\treward: {np.sum(rewards)}," f"\ttotal length: {steps}," f"\taverage length: {average_lengths[-1]}") break # compute Q values qvals = np.zeros_like(values) for t in reversed(range(len(rewards))): qval = rewards[t] + gamma * qval qvals[t] = qval #update actor critic values = torch.FloatTensor(values) qvals = torch.FloatTensor(qvals) log_probs = torch.stack(log_probs) advantage = qvals - values actor_loss = (-log_probs * advantage).mean() critic_loss = 0.5 * advantage.pow(2).mean() ac_loss = actor_loss + critic_loss + 0.001 * entropy_term ac_optimizer.zero_grad() ac_loss.backward() ac_optimizer.step() # Store results actor_critic.average_lengths = average_lengths actor_critic.all_rewards = all_rewards actor_critic.all_lengths = all_lengths ###Output _____no_output_____ ###Markdown Run the model ###Code env.reset() num_inputs = env.observation_space.shape[0] num_outputs = env.action_space.n actor_critic = ActorCriticNet(num_inputs, num_outputs, hidden_size) ac_optimizer = optim.Adam(actor_critic.parameters()) actor_critic_train(500) ###Output _____no_output_____ ###Markdown Plot the results ###Code #@title Helper function for plotting training performance def plot_actor_critic_training(): window = int(episodes / 20) plt.figure(figsize=(15, 4)) plt.subplot(1, 2, 1) smoothed_rewards = pd.Series(actor_critic.all_rewards).rolling(window).mean() std = pd.Series(actor_critic.all_rewards).rolling(window).std() plt.plot(smoothed_rewards, label='Smoothed rewards') plt.fill_between(range(len(smoothed_rewards)), smoothed_rewards-std, smoothed_rewards+std, color='orange', alpha=0.2) plt.xlabel('Episode') plt.ylabel('Reward') plt.subplot(1, 2, 2) plt.plot(actor_critic.all_lengths, label='All lengths') 
plt.plot(actor_critic.average_lengths, label='Average lengths') plt.xlabel('Episode') plt.ylabel('Episode length') plt.legend() plt.tight_layout() plot_actor_critic_training() ###Output _____no_output_____ ###Markdown Exercise 8.4: Effect of episodes on performanceChange the episodes from 500 to 3000 and observe the performance impact. Exercise 8.5: Effect of learning rate on performanceModify the hyperparameters related to learning_rate and gamma and observe the impact on the performance. ---Section 9: RL in the real world ###Code #@title Video 9: Real-world applications and ethics # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="5kBtiW88QVw", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown Exercise 9.1: Group discussionForm a group of 2-3 and have discussions (roughly 3 minutes each) of the following questions:1. **Safety**: what are some safety issues that arise in RL that don’t arise with e.g. supervised learning?2. **Generalization**: What happens if your RL agent is presented with data it hasn’t trained on? (“goes out of distribution”)3. How important do you think **interpretability** is in the ethical and safe deployment of RL agents in the real world? ---Section 10: How to learn more ###Code #@title Video 10: How to learn more # Insert the ID of the corresponding youtube video from IPython.display import YouTubeVideo video = YouTubeVideo(id="dKaOpgor5Ek", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ###Output _____no_output_____ ###Markdown &nbsp; Tutorial 1: Introduction to Reinforcement Learning**Week 3, Day 2: Basic Reinforcement Learning (RL)****By Neuromatch Academy**__Content creators:__ Matthew Sargent, Anoop Kulkarni, Sowmya Parthiban, Feryal Behbahani, Jane Wang__Content reviewers:__ Ezekiel Williams, Mehul Rastogi, Lily Cheng, Roberto Guidotti, Arush Tagade, Kelson Shilling-Scrivo__Content editors:__ Spiros Chavlis __Production editors:__ Spiros Chavlis **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial ObjectivesBy the end of the tutorial, you should be able to:1. Within the RL framework, be able to identify the different components: environment, agent, states, and actions. 2. Understand the Bellman equation and components involved. 3. Implement tabular value-based model-free learning (Q-learning and SARSA).4. Discuss real-world applications and ethical issues of RL.By completing the Bonus sections, you should be able to:1. Run a DQN agent and experiment with different hyperparameters.2. Have a high-level understanding of other (nonvalue-based) RL methods. ###Code # @title Tutorial slides from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/m3kqy/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) ###Output _____no_output_____ ###Markdown These are the slides for all videos in this tutorial. If you want to locally download the slides, click [here](https://osf.io/m3kqy/download). --- SetupRun the following *Setup* cells in order to set up needed functions. Don't worry about the code for now!**Note:** There is an issue with some images not showing up if you're using a Safari browser. Please switch to Chrome if this is the case. 
###Code # @title Install requirements from IPython.display import clear_output # @markdown We install the acme library, see [here](https://github.com/deepmind/acme) for more info. # @markdown **WARNING:** There may be *errors* and/or *warnings* reported during the installation. However, they should be ignored. !pip install --upgrade pip --quiet !pip install imageio --quiet !pip install imageio-ffmpeg !pip install gym --quiet !pip install enum34 --quiet !pip install dm-env --quiet !pip install pandas --quiet !pip install grpcio==1.34.0 --quiet !pip install tensorflow --quiet !pip install typing --quiet !pip install einops --quiet !pip install dm-acme --quiet !pip install dm-acme[reverb] --quiet !pip install dm-acme[jax,tensorflow] --quiet !pip install dm-acme[envs] --quiet !pip install dm-env --quiet !pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet from evaltools.airtable import AirtableForm # generate airtable form atform = AirtableForm('appn7VdPRseSoMXEG','W3D2_T1','https://portal.neuromatchacademy.org/api/redirect/to/3e77471d-4de0-4e43-a026-9cfb603b5197') clear_output() # Import modules import gym import enum import copy import time import acme import torch import base64 import dm_env import IPython import imageio import warnings import itertools import collections import numpy as np import pandas as pd import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import matplotlib as mpl import matplotlib.pyplot as plt from acme import specs from acme import wrappers from acme.utils import tree_utils from acme.utils import loggers from torch.autograd import Variable from torch.distributions import Categorical from typing import Callable, Sequence warnings.filterwarnings('ignore') np.set_printoptions(precision=3, suppress=1) # @title Figure settings import ipywidgets as widgets # interactive display %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") mpl.rc('image', cmap='Blues') # @title Helper Functions # @markdown Implement helpers for value visualisation map_from_action_to_subplot = lambda a: (2, 6, 8, 4)[a] map_from_action_to_name = lambda a: ("up", "right", "down", "left")[a] def plot_values(values, colormap='pink', vmin=-1, vmax=10): plt.imshow(values, interpolation="nearest", cmap=colormap, vmin=vmin, vmax=vmax) plt.yticks([]) plt.xticks([]) plt.colorbar(ticks=[vmin, vmax]) def plot_state_value(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(4, 4)) vmin = np.min(action_values) vmax = np.max(action_values) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_action_values(action_values, epsilon=0.1): q = action_values fig = plt.figure(figsize=(8, 8)) fig.subplots_adjust(wspace=0.3, hspace=0.3) vmin = np.min(action_values) vmax = np.max(action_values) dif = vmax - vmin for a in [0, 1, 2, 3]: plt.subplot(3, 3, map_from_action_to_subplot(a)) plot_values(q[..., a], vmin=vmin - 0.05*dif, vmax=vmax + 0.05*dif) action_name = map_from_action_to_name(a) plt.title(r"$q(s, \mathrm{" + action_name + r"})$") plt.subplot(3, 3, 5) v = (1 - epsilon) * np.max(q, axis=-1) + epsilon * np.mean(q, axis=-1) plot_values(v, colormap='summer', vmin=vmin, vmax=vmax) plt.title("$v(s)$") def plot_stats(stats, window=10): plt.figure(figsize=(16,4)) plt.subplot(121) xline = range(0, len(stats.episode_lengths), window) 
plt.plot(xline, smooth(stats.episode_lengths, window=window)) plt.ylabel('Episode Length') plt.xlabel('Episode Count') plt.subplot(122) plt.plot(xline, smooth(stats.episode_rewards, window=window)) plt.ylabel('Episode Return') plt.xlabel('Episode Count') # @title Helper functions def smooth(x, window=10): return x[:window*(len(x)//window)].reshape(len(x)//window, window).mean(axis=1) # @title Set random seed # @markdown Executing `set_seed(seed=seed)` you are setting the seed # for DL its critical to set the random seed so that students can have a # baseline to compare their results to expected results. # Read more here: https://pytorch.org/docs/stable/notes/randomness.html # Call `set_seed` function in the exercises to ensure reproducibility. import random import torch def set_seed(seed=None, seed_torch=True): if seed is None: seed = np.random.choice(2 ** 32) random.seed(seed) np.random.seed(seed) if seed_torch: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.benchmark = False torch.backends.cudnn.deterministic = True print(f'Random seed {seed} has been set.') # In case that `DataLoader` is used def seed_worker(worker_id): worker_seed = torch.initial_seed() % 2**32 np.random.seed(worker_seed) random.seed(worker_seed) # @title Set device (GPU or CPU). Execute `set_device()` # especially if torch modules used. # inform the user if the notebook uses GPU or CPU. def set_device(): device = "cuda" if torch.cuda.is_available() else "cpu" if device != "cuda": print("WARNING: For this notebook to perform best, " "if possible, in the menu under `Runtime` -> " "`Change runtime type.` select `GPU` ") else: print("GPU is enabled in this notebook.") return device SEED = 2021 set_seed(seed=SEED) DEVICE = set_device() ###Output _____no_output_____ ###Markdown --- Section 1: Introduction to Reinforcement Learning*Time estimate: ~15mins* ###Code # @title Video 1: Introduction to RL from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV18V411p7iK", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"BWz3scQN50M", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 1: Introduction to RL') display(out) ###Output _____no_output_____ ###Markdown Acme: a research framework for reinforcement learning**Acme** is a library of reinforcement learning (RL) agents and agent building blocks by Google DeepMind. Acme strives to expose simple, efficient, and readable agents, that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to do novel research. The design of Acme also attempts to provide multiple points of entry to the RL problem at differing levels of complexity.For more information see the github's repository [deepmind/acme](https://github.com/deepmind/acme). 
--- Section 2: General Formulation of RL Problems and Gridworlds*Time estimate: ~30mins* ###Code # @title Video 2: General Formulation and MDPs from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1k54y1E7Zn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"h6TxAALY5Fc", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 2: General Formulation and MDPs') display(out) ###Output _____no_output_____ ###Markdown The agent interacts with the environment in a loop corresponding to the following diagram. The environment defines a set of **actions** that an agent can take. The agent takes an action informed by the **observations** it receives, and will get a **reward** from the environment after each action. The goal in RL is to find an agent whose actions maximize the total accumulation of rewards obtained from the environment. Section 2.1: The Environment For this practical session we will focus on a **simple grid world** environment,which consists of a 9 x 10 grid of either wall or empty cells, depicted in black and white, respectively. The smiling agent starts from an initial location and needs to navigate to reach the goal square.Below you will find an implementation of this Gridworld as a `dm_env.Environment`.There is no coding in this section, but if you want, you can look over the provided code so that you can familiarize yourself with an example of how to set up a **grid world** environment. ###Code # @title Implement GridWorld # @markdown ##### *Double-click* to inspect the contents of this cell. class ObservationType(enum.IntEnum): STATE_INDEX = enum.auto() AGENT_ONEHOT = enum.auto() GRID = enum.auto() AGENT_GOAL_POS = enum.auto() class GridWorld(dm_env.Environment): def __init__(self, layout, start_state, goal_state=None, observation_type=ObservationType.STATE_INDEX, discount=0.9, penalty_for_walls=-5, reward_goal=10, max_episode_length=None, randomize_goals=False): """Build a grid environment. Simple gridworld defined by a map layout, a start and a goal state. Layout should be a NxN grid, containing: * 0: empty * -1: wall * Any other positive value: value indicates reward; episode will terminate Args: layout: NxN array of numbers, indicating the layout of the environment. start_state: Tuple (y, x) of starting location. goal_state: Optional tuple (y, x) of goal location. Will be randomly sampled once if None. observation_type: Enum observation type to use. One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. 
First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x) discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). reward_goal: Reward added when finding the goal (should be positive). max_episode_length: If set, will terminate an episode after this many steps. randomize_goals: If true, randomize goal at every episode. """ if observation_type not in ObservationType: raise ValueError('observation_type should be a ObservationType instace.') self._layout = np.array(layout) self._start_state = start_state self._state = self._start_state self._number_of_states = np.prod(np.shape(self._layout)) self._discount = discount self._penalty_for_walls = penalty_for_walls self._reward_goal = reward_goal self._observation_type = observation_type self._layout_dims = self._layout.shape self._max_episode_length = max_episode_length self._num_episode_steps = 0 self._randomize_goals = randomize_goals if goal_state is None: # Randomly sample goal_state if not provided goal_state = self._sample_goal() self.goal_state = goal_state def _sample_goal(self): """Randomly sample reachable non-starting state.""" # Sample a new goal n = 0 max_tries = 1e5 while n < max_tries: goal_state = tuple(np.random.randint(d) for d in self._layout_dims) if goal_state != self._state and self._layout[goal_state] == 0: # Reachable state found! return goal_state n += 1 raise ValueError('Failed to sample a goal state.') @property def layout(self): return self._layout @property def number_of_states(self): return self._number_of_states @property def goal_state(self): return self._goal_state @property def start_state(self): return self._start_state @property def state(self): return self._state def set_state(self, x, y): self._state = (y, x) @goal_state.setter def goal_state(self, new_goal): if new_goal == self._state or self._layout[new_goal] < 0: raise ValueError('This is not a valid goal!') # Zero out any other goal self._layout[self._layout > 0] = 0 # Setup new goal location self._layout[new_goal] = self._reward_goal self._goal_state = new_goal def observation_spec(self): if self._observation_type is ObservationType.AGENT_ONEHOT: return specs.Array( shape=self._layout_dims, dtype=np.float32, name='observation_agent_onehot') elif self._observation_type is ObservationType.GRID: return specs.Array( shape=self._layout_dims + (3,), dtype=np.float32, name='observation_grid') elif self._observation_type is ObservationType.AGENT_GOAL_POS: return specs.Array( shape=(4,), dtype=np.float32, name='observation_agent_goal_pos') elif self._observation_type is ObservationType.STATE_INDEX: return specs.DiscreteArray( self._number_of_states, dtype=int, name='observation_state_index') def action_spec(self): return specs.DiscreteArray(4, dtype=int, name='action') def get_obs(self): if self._observation_type is ObservationType.AGENT_ONEHOT: obs = np.zeros(self._layout.shape, dtype=np.float32) # Place agent obs[self._state] = 1 return obs elif self._observation_type is ObservationType.GRID: obs = np.zeros(self._layout.shape + (3,), dtype=np.float32) obs[..., 0] = self._layout < 0 obs[self._state[0], self._state[1], 1] = 1 obs[self._goal_state[0], self._goal_state[1], 2] = 1 return obs elif self._observation_type is ObservationType.AGENT_GOAL_POS: return np.array(self._state + self._goal_state, 
dtype=np.float32) elif self._observation_type is ObservationType.STATE_INDEX: y, x = self._state return y * self._layout.shape[1] + x def reset(self): self._state = self._start_state self._num_episode_steps = 0 if self._randomize_goals: self.goal_state = self._sample_goal() return dm_env.TimeStep( step_type=dm_env.StepType.FIRST, reward=None, discount=None, observation=self.get_obs()) def step(self, action): y, x = self._state if action == 0: # up new_state = (y - 1, x) elif action == 1: # right new_state = (y, x + 1) elif action == 2: # down new_state = (y + 1, x) elif action == 3: # left new_state = (y, x - 1) else: raise ValueError( 'Invalid action: {} is not 0, 1, 2, or 3.'.format(action)) new_y, new_x = new_state step_type = dm_env.StepType.MID if self._layout[new_y, new_x] == -1: # wall reward = self._penalty_for_walls discount = self._discount new_state = (y, x) elif self._layout[new_y, new_x] == 0: # empty cell reward = 0. discount = self._discount else: # a goal reward = self._layout[new_y, new_x] discount = 0. new_state = self._start_state step_type = dm_env.StepType.LAST self._state = new_state self._num_episode_steps += 1 if (self._max_episode_length is not None and self._num_episode_steps >= self._max_episode_length): step_type = dm_env.StepType.LAST return dm_env.TimeStep( step_type=step_type, reward=np.float32(reward), discount=discount, observation=self.get_obs()) def plot_grid(self, add_start=True): plt.figure(figsize=(4, 4)) plt.imshow(self._layout <= -1, interpolation='nearest') ax = plt.gca() ax.grid(0) plt.xticks([]) plt.yticks([]) # Add start/goal if add_start: plt.text( self._start_state[1], self._start_state[0], r'$\mathbf{S}$', fontsize=16, ha='center', va='center') plt.text( self._goal_state[1], self._goal_state[0], r'$\mathbf{G}$', fontsize=16, ha='center', va='center') h, w = self._layout.shape for y in range(h - 1): plt.plot([-0.5, w - 0.5], [y + 0.5, y + 0.5], '-w', lw=2) for x in range(w - 1): plt.plot([x + 0.5, x + 0.5], [-0.5, h - 0.5], '-w', lw=2) def plot_state(self, return_rgb=False): self.plot_grid(add_start=False) # Add the agent location plt.text( self._state[1], self._state[0], u'😃', # fontname='symbola', fontsize=18, ha='center', va='center', ) if return_rgb: fig = plt.gcf() plt.axis('tight') plt.subplots_adjust(0, 0, 1, 1, 0, 0) fig.canvas.draw() data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') w, h = fig.canvas.get_width_height() data = data.reshape((h, w, 3)) plt.close(fig) return data def plot_policy(self, policy): action_names = [ r'$\uparrow$', r'$\rightarrow$', r'$\downarrow$', r'$\leftarrow$' ] self.plot_grid() plt.title('Policy Visualization') h, w = self._layout.shape for y in range(h): for x in range(w): # if ((y, x) != self._start_state) and ((y, x) != self._goal_state): if (y, x) != self._goal_state: action_name = action_names[policy[y, x]] plt.text(x, y, action_name, ha='center', va='center') def plot_greedy_policy(self, q): greedy_actions = np.argmax(q, axis=2) self.plot_policy(greedy_actions) def build_gridworld_task(task, discount=0.9, penalty_for_walls=-5, observation_type=ObservationType.STATE_INDEX, max_episode_length=200): """Construct a particular Gridworld layout with start/goal states. Args: task: string name of the task to use. One of {'simple', 'obstacle', 'random_goal'}. discount: Discounting factor included in all Timesteps. penalty_for_walls: Reward added when hitting a wall (should be negative). observation_type: Enum observation type to use. 
One of: * ObservationType.STATE_INDEX: int32 index of agent occupied tile. * ObservationType.AGENT_ONEHOT: NxN float32 grid, with a 1 where the agent is and 0 elsewhere. * ObservationType.GRID: NxNx3 float32 grid of feature channels. First channel contains walls (1 if wall, 0 otherwise), second the agent position (1 if agent, 0 otherwise) and third goal position (1 if goal, 0 otherwise) * ObservationType.AGENT_GOAL_POS: float32 tuple with (agent_y, agent_x, goal_y, goal_x). max_episode_length: If set, will terminate an episode after this many steps. """ tasks_specifications = { 'simple': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (7, 2) }, 'obstacle': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, -1, 0, 0, -1], [-1, 0, 0, 0, -1, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), 'goal_state': (2, 8) }, 'random_goal': { 'layout': [ [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, -1, -1, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, 0, 0, 0, 0, 0, 0, 0, 0, -1], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1], ], 'start_state': (2, 2), # 'randomize_goals': True }, } return GridWorld( discount=discount, penalty_for_walls=penalty_for_walls, observation_type=observation_type, max_episode_length=max_episode_length, **tasks_specifications[task]) def setup_environment(environment): """Returns the environment and its spec.""" # Make sure the environment outputs single-precision floats. environment = wrappers.SinglePrecisionWrapper(environment) # Grab the spec of the environment. environment_spec = specs.make_environment_spec(environment) return environment, environment_spec ###Output _____no_output_____ ###Markdown We will use two distinct tabular GridWorlds:* `simple` where the goal is at the bottom left of the grid, little navigation required.* `obstacle` where the goal is behind an obstacle the agent must avoid.You can visualize the grid worlds by running the cell below. Note that **S** indicates the start state and **G** indicates the goal. ###Code # Visualise GridWorlds # Instantiate two tabular environments, a simple task, and one that involves # the avoidance of an obstacle. simple_grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID) obstacle_grid = build_gridworld_task( task='obstacle', observation_type=ObservationType.GRID) # Plot them. simple_grid.plot_grid() plt.title('Simple') obstacle_grid.plot_grid() plt.title('Obstacle') ###Output _____no_output_____ ###Markdown In this environment, the agent has four possible **actions**: `up`, `right`, `down`, and `left`. The **reward** is `-5` for bumping into a wall, `+10` for reaching the goal, and `0` otherwise. The episode ends when the agent reaches the goal, and otherwise continues. The **discount** on continuing steps, is $\gamma = 0.9$. 
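As a quick check of how reward and discount combine, consider a hypothetical three-step episode: the agent bumps into a wall, then crosses an empty cell, then reaches the goal. A short standalone computation using the reward scheme above:

```python
# Hypothetical three-step episode: wall bump (-5), empty cell (0), goal (+10),
# discounted with gamma = 0.9 as described above.
gamma = 0.9
rewards = [-5, 0, 10]
G = sum(gamma**t * r for t, r in enumerate(rewards))
print(round(G, 2))  # 3.1
```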
Before we start building an agent to interact with this environment, let's first look at the types of objects the environment either returns (e.g., **observations**) or consumes (e.g., **actions**). The `environment_spec` will show you the form of the **observations**, **rewards** and **discounts** that the environment exposes and the form of the **actions** that can be taken. ###Code # @title Look at `environment_spec` # @markdown ##### **Note:** `setup_environment` is implemented in the same cell as GridWorld. environment, environment_spec = setup_environment(simple_grid) print('actions:\n', environment_spec.actions, '\n') print('observations:\n', environment_spec.observations, '\n') print('rewards:\n', environment_spec.rewards, '\n') print('discounts:\n', environment_spec.discounts, '\n') ###Output _____no_output_____ ###Markdown We first set the environment to its initial state by calling the `reset()` method which returns the first observation and resets the agent to the starting location. ###Code environment.reset() environment.plot_state() ###Output _____no_output_____ ###Markdown Now we want to take an action to interact with the environment. We do this by passing a valid action to the `dm_env.Environment.step()` method which returns a `dm_env.TimeStep` namedtuple with fields `(step_type, reward, discount, observation)`.Let's take an action and visualise the resulting state of the grid-world. (You'll need to rerun the cell if you pick a new action.) **Note for kaggle users:** As kaggle does not render the forms automatically the students should be careful to notice the various instructions and manually play around with the values for the variables ###Code # @title Pick an action and see the state changing action = "left" #@param ["up", "right", "down", "left"] {type:"string"} action_int = {'up': 0, 'right': 1, 'down': 2, 'left':3 } action = int(action_int[action]) timestep = environment.step(action) # pytype: dm_env.TimeStep environment.plot_state() # @title Run loop # @markdown ##### This function runs an agent in the environment for a number of episodes, allowing it to learn. # @markdown ##### *Double-click* to inspect the `run_loop` function. def run_loop(environment, agent, num_episodes=None, num_steps=None, logger_time_delta=1., label='training_loop', log_loss=False, ): """Perform the run loop. We are following the Acme run loop. Run the environment loop for `num_episodes` episodes. Each episode is itself a loop which interacts first with the environment to get an observation and then give that observation to the agent in order to retrieve an action. Upon termination of an episode a new episode will be started. If the number of episodes is not given then this will interact with the environment infinitely. Args: environment: dm_env used to generate trajectories. agent: acme.Actor for selecting actions in the run loop. num_steps: number of steps to run the loop for. If `None` (default), runs without limit. num_episodes: number of episodes to run the loop for. If `None` (default), runs without limit. logger_time_delta: time interval (in seconds) between consecutive logging steps. label: optional label used at logging steps. """ logger = loggers.TerminalLogger(label=label, time_delta=logger_time_delta) iterator = range(num_episodes) if num_episodes else itertools.count() all_returns = [] num_total_steps = 0 for episode in iterator: # Reset any counts and start the environment. 
start_time = time.time() episode_steps = 0 episode_return = 0 episode_loss = 0 timestep = environment.reset() # Make the first observation. agent.observe_first(timestep) # Run an episode. while not timestep.last(): # Generate an action from the agent's policy and step the environment. action = agent.select_action(timestep.observation) timestep = environment.step(action) # Have the agent observe the timestep and let the agent update itself. agent.observe(action, next_timestep=timestep) agent.update() # Book-keeping. episode_steps += 1 num_total_steps += 1 episode_return += timestep.reward if log_loss: episode_loss += agent.last_loss if num_steps is not None and num_total_steps >= num_steps: break # Collect the results and combine with counts. steps_per_second = episode_steps / (time.time() - start_time) result = { 'episode': episode, 'episode_length': episode_steps, 'episode_return': episode_return, } if log_loss: result['loss_avg'] = episode_loss/episode_steps all_returns.append(episode_return) # Log the given results. logger.write(result) if num_steps is not None and num_total_steps >= num_steps: break return all_returns # @title Implement the evaluation loop # @markdown ##### This function runs the agent in the environment for a number of episodes, without allowing it to learn, in order to evaluate it. # @markdown ##### *Double-click* to inspect the `evaluate` function. def evaluate(environment: dm_env.Environment, agent: acme.Actor, evaluation_episodes: int): frames = [] for episode in range(evaluation_episodes): timestep = environment.reset() episode_return = 0 steps = 0 while not timestep.last(): frames.append(environment.plot_state(return_rgb=True)) action = agent.select_action(timestep.observation) timestep = environment.step(action) steps += 1 episode_return += timestep.reward print( f'Episode {episode} ended with reward {episode_return} in {steps} steps' ) return frames def display_video(frames: Sequence[np.ndarray], filename: str = 'temp.mp4', frame_rate: int = 12): """Save and display video.""" # Write the frames to a video. with imageio.get_writer(filename, fps=frame_rate) as video: for frame in frames: video.append_data(frame) # Read video and display the video. video = open(filename, 'rb').read() b64_video = base64.b64encode(video) video_tag = ('<video width="320" height="240" controls alt="test" ' 'src="data:video/mp4;base64,{0}">').format(b64_video.decode()) return IPython.display.HTML(video_tag) ###Output _____no_output_____ ###Markdown Section 2.2: The AgentWe will be implementing Tabular & Function Approximation agents. Tabular agents are purely in Python.All agents will share the same interface from the Acme `Actor`. Here we borrow a figure from Acme to show how this interaction occurs: Agent interfaceEach agent implements the following functions:```pythonclass Agent(acme.Actor): def __init__(self, number_of_actions, number_of_states, ...): """Provides the agent the number of actions and number of states.""" def select_action(self, observation): """Generates actions from observations.""" def observe_first(self, timestep): """Records the initial timestep in a trajectory.""" def observe(self, action, next_timestep): """Records the transition which occurred from taking an action.""" def update(self): """Updates the agent's internals to potentially change its behavior."""```Remarks on the `observe()` function:1. In the last method, the `next_timestep` provides the `reward`, `discount`, and `observation` that resulted from selecting `action`.2. 
The `next_timestep.step_type` will be either `MID` or `LAST` and should be used to determine whether this is the last observation in the episode.3. The `next_timestep.step_type` cannot be `FIRST`; such a timestep should only ever be given to `observe_first()`. Coding Exercise 2.1: Random AgentBelow is a partially complete implemention of an agent that follows a random (non-learning) policy. Fill in the ```select_action``` method.The ```select_action``` method should return a random **integer** between 0 and ```self._num_actions``` (not a tensor or an array!) ###Code class RandomAgent(acme.Actor): def __init__(self, environment_spec): """Gets the number of available actions from the environment spec.""" self._num_actions = environment_spec.actions.num_values def select_action(self, observation): """Selects an action uniformly at random.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the select action method") ################################################# # TODO return a random integer beween 0 and self._num_actions. # HINT: see the reference for how to sample a random integer in numpy: # https://numpy.org/doc/1.16/reference/routines.random.html action = ... return action def observe_first(self, timestep): """Does not record as the RandomAgent has no use for data.""" pass def observe(self, action, next_timestep): """Does not record as the RandomAgent has no use for data.""" pass def update(self): """Does not update as the RandomAgent does not learn from data.""" pass # add event to airtable atform.add_event('Coding Exercise 2.1: Random Agent') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_23bbdfe0.py) ###Code # @title Visualisation of a random agent in GridWorld # Create the agent by giving it the action space specification. agent = RandomAgent(environment_spec) # Run the agent in the evaluation loop, which returns the frames. frames = evaluate(environment, agent, evaluation_episodes=1) # Visualize the random agent's episode. 
display_video(frames) ###Output _____no_output_____ ###Markdown --- Section 3: The Bellman Equation*Time estimate: ~15mins* ###Code # @title Video 3: The Bellman Equation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Lv411E7CB", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"cLCoNBmYUns", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 3: The Bellman Equation') display(out) ###Output _____no_output_____ ###Markdown In this tutorial we focus mainly on **value based methods**, where agents maintain a value for all state-action pairs and use those estimates to choose actions that maximize that **value** (instead of maintaining a policy directly, like in **policy gradient methods**). We represent the **action-value function** (otherwise known as $\color{green}Q$-function associated with following/employing a policy $\pi$ in a given MDP as:\begin{equation}\color{green}Q^{\color{blue}{\pi}}(\color{red}{s},\color{blue}{a}) = \mathbb{E}_{\tau \sim P^{\color{blue}{\pi}}} \left[ \sum_t \gamma^t \color{green}{r_t}| s_0=\color{red}s,a_0=\color{blue}{a} \right]\end{equation}where $\tau = \{\color{red}{s_0}, \color{blue}{a_0}, \color{green}{r_0}, \color{red}{s_1}, \color{blue}{a_1}, \color{green}{r_1}, \cdots \}$Recall that efficient value estimations are based on the famous **_Bellman Expectation Equation_**:\begin{equation}\color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a}) = \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\left( \color{green}{R}(\color{red}{s},\color{blue}{a}, \color{red}{s'}) + \gamma \color{green}V^\color{blue}{\pi}(\color{red}{s'}) \right)\end{equation}where $\color{green}V^\color{blue}{\pi}$ is the expected $\color{green}Q^\color{blue}{\pi}$ value for a particular state, i.e. $\color{green}V^\color{blue}{\pi}(\color{red}{s}) = \sum_{\color{blue}{a} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi}(\color{blue}{a} |\color{red}{s}) \color{green}Q^\color{blue}{\pi}(\color{red}{s},\color{blue}{a})$. 
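To make these equations concrete, here is a minimal numpy sketch of a single Bellman expectation backup on a toy two-state MDP. The transition probabilities `P`, rewards `R`, and policy `pi` below are made-up illustrative values (they are not part of the GridWorld environment used in this tutorial), so treat this as a sketch of the computation rather than code reused later.

```python
import numpy as np

num_states, num_actions = 2, 2
gamma = 0.9

# Made-up model: P[s, a, s'] are transition probabilities, R[s, a, s'] rewards.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 1.0], [1.0, 0.0]]])

# A fixed policy pi[s, a] (uniformly random here).
pi = np.full((num_states, num_actions), 0.5)

# One Bellman expectation backup applied to an arbitrary Q estimate.
Q = np.zeros((num_states, num_actions))
V = (pi * Q).sum(axis=-1)                              # V(s) = sum_a pi(a|s) Q(s, a)
Q = (P * (R + gamma * V[None, None, :])).sum(axis=-1)  # Q(s, a) = sum_s' P(s'|s, a) (R + gamma V(s'))
print(Q)
```

Repeating this backup until the values stop changing is exactly policy evaluation with access to the model $P$ and $R$; the agents in the next section estimate the same quantity from sampled transitions instead.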
--- Section 4: Policy Evaluation*Time estimate: ~30mins* ###Code # @title Video 4: Policy Evaluation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV15f4y157zA", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HAxR4SuaZs4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 4: Policy Evaluation') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **Episodic vs non-episodic environments:** Up until now, we've mainly been talking about episodic environments, or environments that terminate and reset (resampled) after a finite number of steps. However, there are also *non-episodic* environments, in which an agent cannot count on the environment resetting. Thus, they are forced to learn in a *continual* fashion.**Policy iteration vs value iteration:** Compare the two equations below, noting that the only difference is that in value iteration, the second sum is replaced by a max.*Policy iteration (using Bellman expectation equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\sum_{\color{blue}{a'} \in \color{blue}{\mathcal{A}}} \color{blue}{\pi_{k-1}}(\color{blue}{a'} |\color{red}{s'}) \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation}*Value iteration (using Bellman optimality equation)*\begin{equation}\color{green}Q_\color{green}{k}(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}{R}(\color{red}{s},\color{blue}{a}) +\gamma \sum_{\color{red}{s'}\in \color{red}{\mathcal{S}}} \color{purple}P(\color{red}{s'} |\color{red}{s},\color{blue}{a})\max_{\color{blue}{a'}} \color{green}{Q_{k-1}}(\color{red}{s'},\color{blue}{a'})\end{equation} Coding Exercise 4.1 Policy Evaluation Agent Tabular agents implement a function `q_values()` returning a matrix of Q valuesof shape: (`number_of_states`, `number_of_actions`)In this section, we will implement a `PolicyEvalAgent` as an ACME actor: given an `evaluation_policy` $\pi_e$ and a `behaviour_policy` $\pi_b$, it will use the `behaviour_policy` to choose actions, and it will use the corresponding trajectory data to evaluate the `evaluation_policy` (i.e. compute the Q-values as if you were following the `evaluation_policy`). Algorithm:**Initialize** $Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}(\color{red}s)$**Loop forever**:1. $\color{red}{s} \gets{}$current (nonterminal) state 2. $\color{blue}{a} \gets{} \text{behaviour_policy }\pi_b(\color{red}s)$ 3. Take action $\color{blue}{a}$; observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$4. 
Compute TD-error: $\delta = \color{green}R + \gamma Q(\color{red}{s'}, \underbrace{\pi_e(\color{red}{s'})}_{\color{blue}{a'}}) − Q(\color{red}s, \color{blue}a)$5. Update Q-value with a small $\alpha$ step: $Q(\color{red}s, \color{blue}a) \gets Q(\color{red}s, \color{blue}a) + \alpha \delta$We will use a uniform `random policy` as our `evaluation policy` here, but you could replace this with any policy you want, such as a greedy one. ###Code # Uniform random policy def random_policy(q): return np.random.randint(4) class PolicyEvalAgent(acme.Actor): def __init__(self, environment_spec, evaluated_policy, behaviour_policy=random_policy, step_size=0.1): self._state = None # Get number of states and actions from the environment spec. self._number_of_states = environment_spec.observations.num_values self._number_of_actions = environment_spec.actions.num_values self._step_size = step_size self._behaviour_policy = behaviour_policy self._evaluated_policy = evaluated_policy ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Initialize your Q-values!") ################################################# # TODO Initialize the Q-values to be all zeros. # (Note: can also be random, but we use zeros here for reproducibility) # HINT: This is a table of state and action pairs, so needs to be a 2-D # array. See the reference for how to create this in numpy: # https://numpy.org/doc/stable/reference/generated/numpy.zeros.html self._q = ... self._action = None self._next_state = None @property def q_values(self): # return the Q values return self._q def select_action(self, observation): # Select an action return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): self._state = timestep.observation def observe(self, action, next_timestep): s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute TD-Error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Need to select the next action") ################################################# # TODO Select the next action from the evaluation policy # HINT: Refer to step 4 of the algorithm above. next_a = ... self._td_error = r + g * self._q[next_s, next_a] - self._q[s, a] def update(self): # Updates s = self._state a = self._action # Q-value table update. self._q[s, a] += self._step_size * self._td_error # Update the state self._state = self._next_state # add event to airtable atform.add_event('Coding Exercise 4.1 Policy Evaluation Agent') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_b681200a.py) ###Code # @title Perform policy evaluation # @markdown ##### Here you can visualize the state value and action-value functions for the "simple" task. num_steps = 1e3 # Create the environment grid = build_gridworld_task(task='simple') environment, environment_spec = setup_environment(grid) # Create the policy evaluation agent to evaluate a random policy.
agent = PolicyEvalAgent(environment_spec, evaluated_policy=random_policy) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=int(num_steps)) # get the q-values q = agent.q_values.reshape(grid._layout.shape + (4, )) # visualize value functions print('AFTER {} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=1.) ###Output _____no_output_____ ###Markdown --- Section 5: Tabular Value-Based Model-Free Learning*Time estimate: ~50mins* ###Code # @title Video 5: Model-Free Learning from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1iU4y1n7M6", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"Y4TweUYnexU", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 5: Model-Free Learning') display(out) ###Output _____no_output_____ ###Markdown Lecture footnotes: **On-policy (SARSA) vs off-policy (Q-learning) TD control:** Compare the two equations below and see that the only difference is that for Q-learning, the update is performed assuming that a greedy policy is followed, which is not the one used to collect the data, hence the name *off-policy*. *SARSA*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation}*Q-learning*\begin{equation}\color{green}Q(\color{red}{s},\color{blue}{a}) \leftarrow \color{green}Q(\color{red}{s},\color{blue}{a}) +\alpha(\color{green}{r} + \gamma\max_{\color{blue}{a'}} \color{green}{Q}(\color{red}{s'},\color{blue}{a'}) - \color{green}{Q}(\color{red}{s},\color{blue}{a}))\end{equation} Section 5.1: On-policy control: SARSA AgentIn this section, we are focusing on control RL algorithms, which perform the **evaluation** and **improvement** of the policy synchronously. That is, the policy that is being evaluated improves as the agent is using it to interact with the environent.The first algorithm we are going to be looking at is SARSA. This is an **on-policy algorithm** -- i.e: the data collection is done by leveraging the policy we're trying to optimize. As discussed during lectures, a greedy policy with respect to a given $\color{Green}Q$ fails to explore the environment as needed; we will use instead an $\epsilon$-greedy policy with respect to $\color{Green}Q$. SARSA Algorithm**Input:**- $\epsilon \in (0, 1)$ the probability of taking a random action , and- $\alpha > 0$ the step size, also known as learning rate.**Initialize:** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s}$ ∈ $\mathcal{\color{red}S}$ and $\color{blue}a$ ∈ $\mathcal{\color{blue}A}$**Loop forever:**1. Get $\color{red}s \gets{}$current (non-terminal) state 2. 
Select $\color{blue}a \gets{} \text{epsilon_greedy}(\color{green}Q(\color{red}s, \cdot))$ 3. Step in the environment by passing the selected action $\color{blue}a$4. Observe resulting reward $\color{green}r$, discount $\gamma$, and state $\color{red}{s'}$5. Compute TD error: $\Delta \color{green}Q \gets \color{green}r + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}s, \color{blue}a)$, where $\color{blue}{a'} \gets \text{epsilon_greedy}(\color{green}Q(\color{red}{s'}, \cdot))$6. Update $\color{green}Q(\color{red}s, \color{blue}a) \gets \color{green}Q(\color{red}s, \color{blue}a) + \alpha \Delta \color{green}Q$ Coding Exercise 5.1: Implement $\epsilon$-greedyBelow you will find incomplete code for sampling from an $\epsilon$-greedy policy, to be used later when we implement an agent that learns values according to the SARSA algorithm. ###Code def epsilon_greedy( q_values_at_s: np.ndarray, # Q-values in state s: Q(s, a). epsilon: float = 0.1 # Probability of taking a random action. ): """Return an epsilon-greedy action sample.""" ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete epsilon greedy policy function") ################################################# # TODO generate a uniform random number and compare it to epsilon to decide if # the action should be greedy or not # HINT: Use np.random.random() to generate a random float from 0 to 1. if ...: #TODO Greedy: Pick action with the largest Q-value. action = ... else: # Get the number of actions from the size of the given vector of Q-values. num_actions = np.array(q_values_at_s).shape[-1] # TODO else return a random action # HINT: Use np.random.randint() to generate a random integer. action = ... return action # add event to airtable atform.add_event('Coding Exercise 5.1: Implement epsilon-greedy') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_7137b538.py) ###Code # @title Sample action from $\epsilon$-greedy # @markdown ##### With $\epsilon=0.5$, you should see that about half the time, you will get back the optimal action 3, but half the time, it will be random. # Create fake q-values q_values = np.array([0, 0, 0, 1]) # Set epsilon = 0.5 epsilon = 0.5 action = epsilon_greedy(q_values, epsilon=epsilon) print(action) ###Output _____no_output_____ ###Markdown Coding Exercise 5.2: Run your SARSA agent on the `obstacle` environmentThis environment is similar to the Cliff-walking example from [Sutton & Barto](http://incompleteideas.net/book/RLbook2018.pdf) and allows us to see the different policies learned by on-policy vs off-policy methods. Try varying the number of steps. ###Code class SarsaAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, epsilon: float, step_size: float = 0.1 ): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size self._epsilon = epsilon # Containers you may find useful.
self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return epsilon_greedy(self._q[observation], self._epsilon) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the action that would be taken from the next state. next_a = self.select_action(next_s) # Compute the on-policy Q-value update. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the on-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: see step 5 in the pseudocode above. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[s, a] += ... # Update the current state. self._state = self._next_state # add event to airtable atform.add_event('Coding Exercise 5.2: Run your SARSA agent on the obstacle environment') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_4099088a.py) ###Code # @title Run SARSA agent and visualize value function num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # Create the environment. grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # Create the agent. agent = SarsaAgent(environment_spec, epsilon=0.1, step_size=0.1) # Run the experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) print('AFTER {0:,} STEPS ...'.format(num_steps)) # Get the Q-values and reshape them to recover grid-like structure of states. q_values = agent.q_values grid_shape = grid.layout.shape q_values = q_values.reshape([*grid_shape, -1]) # Visualize the value and Q-value tables. plot_action_values(q_values, epsilon=1.) # Visualize the greedy policy. environment.plot_greedy_policy(q_values) ###Output _____no_output_____ ###Markdown Section 5.2 Off-policy control: Q-learning AgentReminder: $\color{green}Q$-learning is a very powerful and general algorithm, that enables control (figuring out the optimal policy/value function) both on and off-policy.**Initialize** $\color{green}Q(\color{red}{s}, \color{blue}{a})$ for all $\color{red}{s} \in \color{red}{\mathcal{S}}$ and $\color{blue}{a} \in \color{blue}{\mathcal{A}}$**Loop forever**:1. Get $\color{red}{s} \gets{}$current (non-terminal) state 2. Select $\color{blue}{a} \gets{} \text{behaviour_policy}(\color{red}{s})$ 3. 
Step in the environment by passing the selected action $\color{blue}{a}$4. Observe resulting reward $\color{green}{r}$, discount $\gamma$, and state, $\color{red}{s'}$5. Compute the TD error: $\Delta \color{green}Q \gets \color{green}{r} + \gamma \color{green}Q(\color{red}{s'}, \color{blue}{a'}) − \color{green}Q(\color{red}{s}, \color{blue}{a})$, where $\color{blue}{a'} \gets \arg\max_{\color{blue}{\mathcal A}} \color{green}Q(\color{red}{s'}, \cdot)$6. Update $\color{green}Q(\color{red}{s}, \color{blue}{a}) \gets \color{green}Q(\color{red}{s}, \color{blue}{a}) + \alpha \Delta \color{green}Q$Notice that the actions $\color{blue}{a}$ and $\color{blue}{a'}$ are not selected using the same policy, hence this algorithm being **off-policy**. Coding Exercise 5.3: Implement Q-Learning ###Code QValues = np.ndarray Action = int # A value-based policy takes the Q-values at a state and returns an action. ValueBasedPolicy = Callable[[QValues], Action] class QLearningAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, behaviour_policy: ValueBasedPolicy, step_size: float = 0.1): # Get number of states and actions from the environment spec. self._num_states = environment_spec.observations.num_values self._num_actions = environment_spec.actions.num_values # Create the table of Q-values, all initialized at zero. self._q = np.zeros((self._num_states, self._num_actions)) # Store algorithm hyper-parameters. self._step_size = step_size # Store behavior policy. self._behaviour_policy = behaviour_policy # Containers you may find useful. self._state = None self._action = None self._next_state = None @property def q_values(self): return self._q def select_action(self, observation): return self._behaviour_policy(self._q[observation]) def observe_first(self, timestep): # Set current state. self._state = timestep.observation def observe(self, action, next_timestep): # Unpacking the timestep to lighten notation. s = self._state a = action r = next_timestep.reward g = next_timestep.discount next_s = next_timestep.observation # Compute the TD error. self._action = a self._next_state = next_s ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the off-policy Q-value update") ################################################# # TODO complete the line below to compute the temporal difference error # HINT: This is very similar to what we did for SARSA, except keep in mind # that we're now taking a max over the q-values (see lecture footnotes above). # You will find the function np.max() useful. self._td_error = ... def update(self): # Optional unpacking to lighten notation. s = self._state a = self._action ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete value update") ################################################# # Update the Q-value table value at (s, a). # TODO: Update the Q-value table value at (s, a). # HINT: see step 6 in the pseudocode above, remember that alpha = step_size! self._q[...] += ... # Update the current state. 
self._state = self._next_state # add event to airtable atform.add_event('Coding Exercise 5.3: Implement Q-Learning') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_8c430935.py) Run your Q-learning agent on the `obstacle` environment ###Code # @title Run your Q-learning epsilon = 1. # @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=0) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown Experiment with different levels of greediness* The default was $\epsilon=1.$, what does this correspond to?* Try also $\epsilon =0.1, 0.5$. What do you observe? Does the behaviour policy affect the training in any way? ###Code # @title Run the cell epsilon = 0.1 # @param {type:"number"} num_steps = 1e5 # @param {type:"number"} num_steps = int(num_steps) # environment grid = build_gridworld_task(task='obstacle') environment, environment_spec = setup_environment(grid) # behavior policy behavior_policy = lambda qval: epsilon_greedy(qval, epsilon=epsilon) # agent agent = QLearningAgent(environment_spec, behavior_policy, step_size=0.1) # run experiment and get the value functions from agent returns = run_loop(environment=environment, agent=agent, num_steps=num_steps) # get the q-values q = agent.q_values.reshape(grid.layout.shape + (4,)) # visualize value functions print('AFTER {:,} STEPS ...'.format(num_steps)) plot_action_values(q, epsilon=epsilon) # visualise the greedy policy grid.plot_greedy_policy(q) plt.show() ###Output _____no_output_____ ###Markdown --- Section 6: Function Approximation*Time estimate: ~25mins* ###Code # @title Video 6: Function approximation from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1sg411M7cn", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"7_MYePyYhrM", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 6: Function approximation') display(out) ###Output _____no_output_____ ###Markdown So far we only considered look-up tables for value-functions. 
In all previous cases every state and action pair $(\color{red}{s}, \color{blue}{a})$, had an entry in our $\color{green}Q$-table. Again, this is possible in this environment as the number of states is equal to the number of cells in the grid. But this is not scalable to situations where, say, the goal location changes or the obstacles are in different locations at every episode (consider how big the table could be in this situation?).An example (not covered in this tutorial) is ATARI from pixels, where the number of possible frames an agent can see is exponential in the number of pixels on the screen.But what we **really** want is just to be able to *compute* the Q-value, when fed with a particular $(\color{red}{s}, \color{blue}{a})$ pair. So if we had a way to get a function to do this work instead of keeping a big table, we'd get around this problem.To address this, we can use **function approximation** as a way to generalize Q-values over some representation of the very large state space, and **train** them to output the values they should. In this section, we will explore $\color{green}Q$-learning with function approximation, which (although it has been theoretically proven to diverge for some degenerate MDPs) can yield impressive results in very large environments. In particular, we will look at [Neural Fitted Q (NFQ) Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) and [Deep Q-Networks (DQN)](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf). Section 6.1 Replay BuffersAn important property of off-policy methods like $\color{green}Q$-learning is that they involve two policies: one for exploration and one that is being optimized (via the $\color{green}Q$-function updates). This means that we can generate data from the **behavior** policy and insert that data into some form of data storage---usually referred to as **replay**.In order to optimize the $\color{green}Q$-function we can then sample data from the replay **dataset** and use that data to perform an update. An illustration of this learning loop is shown below. In the next cell we will show how to implement a simple replay buffer. This can be as simple as a python list containing transition data. In more complicated scenarios we might want to have a more performance-tuned variant, we might have to be more concerned about how large replay is and what to do when its full, and we might want to sample from replay in different ways. But a simple python list can go a surprisingly long way. ###Code # Simple replay buffer # Create a convenient container for the SARS tuples required by deep RL agents. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class ReplayBuffer(object): """A simple Python replay buffer.""" def __init__(self, capacity: int = None): self.buffer = collections.deque(maxlen=capacity) self._prev_state = None def add_first(self, initial_timestep: dm_env.TimeStep): self._prev_state = initial_timestep.observation def add(self, action: int, timestep: dm_env.TimeStep): transition = Transitions( state=self._prev_state, action=action, reward=timestep.reward, discount=timestep.discount, next_state=timestep.observation, ) self.buffer.append(transition) self._prev_state = timestep.observation def sample(self, batch_size: int) -> Transitions: # Sample a random batch of Transitions as a list. 
batch_as_list = random.sample(self.buffer, batch_size) # Convert the list of `batch_size` Transitions into a single Transitions # object where each field has `batch_size` stacked fields. return tree_utils.stack_sequence_fields(batch_as_list) def flush(self) -> Transitions: entire_buffer = tree_utils.stack_sequence_fields(self.buffer) self.buffer.clear() return entire_buffer def is_ready(self, batch_size: int) -> bool: return batch_size <= len(self.buffer) ###Output _____no_output_____ ###Markdown Section 6.2: NFQ Agent[Neural Fitted Q Iteration](http://ml.informatik.uni-freiburg.de/former/_media/publications/rieecml05.pdf) was one of the first papers to demonstrate how to leverage recent advances in Deep Learning to approximate the Q-value by a neural network.$^1$In other words, the value $\color{green}Q(\color{red}{s}, \color{blue}{a})$ are approximated by the output of a neural network $\color{green}{Q_w}(\color{red}{s}, \color{blue}{a})$ for each possible action $\color{blue}{a} \in \color{blue}{\mathcal{A}}$.$^2$When introducing function approximations, and neural networks in particular, we need to have a loss to optimize. But looking back at the tabular setting above, you can see that we already have some notion of error: the **TD error**.By training our neural network to output values such that the *TD error is minimized*, we will also satisfy the Bellman Optimality Equation, which is a good sufficient condition to enforce, to obtain an optimal policy.Thanks to automatic differentiation, we can just write the TD error as a loss, e.g., with an $\ell^2$ loss, but others would work too:\begin{equation}L(\color{green}w) = \mathbb{E}\left[ \left( \color{green}{r} + \gamma \max_\color{blue}{a'} \color{green}{Q_w}(\color{red}{s'}, \color{blue}{a'}) − \color{green}{Q_w}(\color{red}{s}, \color{blue}{a}) \right)^2\right].\end{equation}Then we can compute the gradient with respect to the parameters of the neural network and improve our Q-value approximation incrementally.NFQ builds on $\color{green}Q$-learning, but if one were to update the Q-values online directly, the training can be unstable and very slow.Instead, NFQ uses a replay buffer, similar to what we see implemented above (Section 6.1), to update the Q-value in a batched setting.When it was introduced, it also was entirely off-policy using a uniformly random policy to collect data, which was prone to instability when applied to more complex environments (e.g. when the input are pixels or the tasks are longer and more complicated).But it is a good stepping stone to the more complex agents used today. Here, we will look at a slightly different and modernised implementation of NFQ.Below you will find an incomplete NFQ agent that takes in observations from a gridworld. Instead of receiving a tabular state, it receives an observation in the form of its (x,y) coordinates in the gridworld, and the (x,y) coordinates of the goal.The goal of this coding exercise is to complete this agent by implementing the loss, using mean squared error.---$^1$ If you read the NFQ paper, they use a "control" notation, where there is a "cost to minimize", instead of "rewards to maximize", so don't be surprised if signs/max/min do not correspond.$^2$ We could feed it $\color{blue}{a}$ as well and ask $Q_w$ for a single scalar value, but given we have a fixed number of actions and we usually need to take an $argmax$ over them, it's easiest to just output them all in one pass. 
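Before completing the NFQ agent below, it may help to see the replay buffer from Section 6.1 used on its own. The following is a minimal sketch that fills the buffer with hand-made transitions built from `dm_env` helper functions; the observations, rewards, and discounts are arbitrary made-up values standing in for real environment output, and the snippet assumes the `ReplayBuffer` class defined above is available.

```python
import dm_env
import numpy as np

buffer = ReplayBuffer(capacity=1000)

# The first timestep only carries an observation.
buffer.add_first(dm_env.restart(observation=np.zeros(4, dtype=np.float32)))

# Subsequent timesteps carry a reward and discount as well.
for _ in range(5):
  action = np.random.randint(4)
  timestep = dm_env.transition(reward=1.0,
                               observation=np.random.rand(4).astype(np.float32),
                               discount=0.9)
  buffer.add(action, timestep)

# Once enough transitions are stored, sample a stacked minibatch.
if buffer.is_ready(batch_size=4):
  batch = buffer.sample(batch_size=4)
  print(batch.state.shape, batch.action, batch.reward)
```

`stack_sequence_fields` turns the sampled list of transitions into a single `Transitions` tuple whose fields each gain a leading batch dimension, which is the format the `update()` method of the agent below expects.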
Coding Exercise 6.1: Implement NFQ ###Code # Create a convenient container for the SARS tuples required by NFQ. Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) class NeuralFittedQAgent(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, q_network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 3e-4): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(),lr = learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): q_next_s = self._q_network(next_s) # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the NFQ Agent") ################################################# # TODO Average the squared TD errors over the entire batch using # self._loss_fn, which is defined above as nn.MSELoss() # HINT: Take a look at the reference for nn.MSELoss here: # https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html # What should you put for the input and the target? loss = ... # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() # Store the loss for logging purposes (see run_loop implementation above). 
self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # add event to airtable atform.add_event('Coding Exercise 6.1: Implement NFQ') ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_f331422f.py) Train and Evaluate the NFQ Agent ###Code # @title Training the NFQ Agent epsilon = 0.4 # @param {type:"number"} max_episode_length = 200 # Create the environment. grid = build_gridworld_task( task='simple', observation_type=ObservationType.AGENT_GOAL_POS, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Define the neural function approximator (aka Q network). q_network = nn.Sequential(nn.Linear(4, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values)) # Build the trainable Q-learning agent agent = NeuralFittedQAgent( environment_spec, q_network, epsilon=epsilon, replay_capacity=100_000, batch_size=10, learning_rate=1e-3) returns = run_loop( environment=environment, agent=agent, num_episodes=500, logger_time_delta=1., log_loss=True) # @title Evaluating the agent (set $\epsilon=0$) # @markdown ##### Temporarily change epsilon to be more greedy; remember to change it back. agent._epsilon = 0.0 # Record a few episodes. frames = evaluate(environment, agent, evaluation_episodes=5) # Change epsilon back. agent._epsilon = epsilon # Display the video of the episodes. display_video(frames, frame_rate=6) # @title Visualise the learned $Q$ values # @markdown ##### Evaluate the policy for every state, similar to tabular agents above. environment.reset() pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4, )) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) ###Output _____no_output_____ ###Markdown Compare the Q-values approximated with the neural network with the tabular case in **Section 5.3**. Notice how the neural network is generalizing from the visited states to the unvisited similar states, while in the tabular case we updated the value of each state only when we visited that state. Compare the greedy and behaviour ($\epsilon$-greedy) policies ###Code # @title Compare the greedy policy with the agent's policy # @markdown ##### Notice that the agent's behavior policy has a lot more randomness, due to the high $\epsilon$. However, the greedy policy that's learned is optimal. 
environment.plot_greedy_policy(q) plt.figtext(-.08, .95, 'Greedy policy using the learnt Q-values') plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's behavior policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown --- Section 7: RL in the real world*Time estimate: ~10mins* ###Code # @title Video 7: Real-world applications and ethics from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Nq4y1X7AF", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"5kBtiW88QVw", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 7: Real-world applications and ethics') display(out) ###Output _____no_output_____ ###Markdown Exercise 7: Group discussionForm a group of 2-3 and have discussions (roughly 3 minutes each) of the following questions:1. **Safety**: What are some safety issues that arise in RL that don’t arise with e.g. supervised learning?2. **Generalization**: What happens if your RL agent is presented with data it hasn’t trained on? (“goes out of distribution”)3. How important do you think **interpretability** is in the ethical and safe deployment of RL agents in the real world? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_99944c89.py) --- Summary*Time estimate: ~3mins* In this tutorial we learned the most important aspects of RL. Within the RL framework, we identified the different components: environment, agent, states, and actions. In addition, we learned about the Bellman equation and the components involved.We implemented tabular value-based model-free learning (Q-learning and SARSA). Finally, we discussed real-world applications and ethical issues of RL.If you have time left, the Bonus sections let you run a DQN agent and experiment with different hyperparameters (Bonus 1), and get a high-level overview of other (non-value-based) RL methods (Bonus 2).See also our *Appendix and further reading* for more information.
###Code # @title Video 8: How to learn more from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1WM4y1T7G5", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"dKaOpgor5Ek", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 8: How to learn more') display(out) # @title Airtable Submission Link from IPython import display as IPydisplay IPydisplay.HTML( f""" <div> <a href= "{atform.url()}" target="_blank"> <img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1" alt="button link end of day Survey" style="width:410px"></a> </div>""" ) ###Output _____no_output_____ ###Markdown --- Bonus 1: DQN*Time estimate: ~30mins* ###Code # @title Video 9: Deep Q-Networks (DQN) from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV1Mo4y1Q7yD", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"HEDoNtV1y-w", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 9: Deep Q-Networks (DQN)') display(out) ###Output _____no_output_____ ###Markdown Adopted from Mnih et al., 2015 In this section, we will look at an advanced deep RL Agent based on the following publication, [Playing Atari with Deep Reinforcement Learning](https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning), which introduced the first deep learning model to successfully learn control policies directly from high-dimensional pixel inputs using RL.Here the agent will act directly on a pixel representation of the gridworld. You can find an incomplete implementation below. Bonus Coding Exercise 1: Run a DQN Agent ###Code class DQN(acme.Actor): def __init__(self, environment_spec: specs.EnvironmentSpec, network: nn.Module, replay_capacity: int = 100_000, epsilon: float = 0.1, batch_size: int = 1, learning_rate: float = 5e-4, target_update_frequency: int = 10): # Store agent hyperparameters and network. self._num_actions = environment_spec.actions.num_values self._epsilon = epsilon self._batch_size = batch_size self._q_network = q_network # create a second q net with the same structure and initial values, which # we'll be updating separately from the learned q-network. 
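    # Keeping these target weights frozen between periodic syncs makes the
    # bootstrap target r + discount * max_a' Q_target(s', a') stable while the
    # online network is trained, one of the key stabilising ideas behind DQN.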
self._target_network = copy.deepcopy(self._q_network) # Container for the computed loss (see run_loop implementation above). self.last_loss = 0.0 # Create the replay buffer. self._replay_buffer = ReplayBuffer(replay_capacity) # Keep an internal tracker of steps self._current_step = 0 # How often to update the target network self._target_update_frequency = target_update_frequency # Setup optimizer that will train the network to minimize the loss. self._optimizer = torch.optim.Adam(self._q_network.parameters(), lr=learning_rate) self._loss_fn = nn.MSELoss() def select_action(self, observation): # Compute Q-values. # Sonnet requires a batch dimension, which we squeeze out right after. q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) # Adds batch dimension. q_values = q_values.squeeze(0) # Removes batch dimension # Select epsilon-greedy action. if self._epsilon < torch.rand(1): action = q_values.argmax(axis=-1) else: action = torch.randint(low=0, high=self._num_actions , size=(1,), dtype=torch.int64) return action def q_values(self, observation): q_values = self._q_network(torch.tensor(observation).unsqueeze(0)) return q_values.squeeze(0).detach() def update(self): self._current_step += 1 if not self._replay_buffer.is_ready(self._batch_size): # If the replay buffer is not ready to sample from, do nothing. return # Sample a minibatch of transitions from experience replay. transitions = self._replay_buffer.sample(self._batch_size) # Optionally unpack the transitions to lighten notation. # Note: each of these tensors will be of shape [batch_size, ...]. s = torch.tensor(transitions.state) a = torch.tensor(transitions.action,dtype=torch.int64) r = torch.tensor(transitions.reward) d = torch.tensor(transitions.discount) next_s = torch.tensor(transitions.next_state) # Compute the Q-values at next states in the transitions. with torch.no_grad(): ################################################# # Fill in missing code below (...), # then remove or comment the line below to test your implementation raise NotImplementedError("Student exercise: complete the DQN Agent") ################################################# #TODO get the value of the next states evaluated by the target network #HINT: use self._target_network, defined above. q_next_s = ... # Shape [batch_size, num_actions]. max_q_next_s = q_next_s.max(axis=-1)[0] # Compute the TD error and then the losses. target_q_value = r + d * max_q_next_s # Compute the Q-values at original state. q_s = self._q_network(s) # Gather the Q-value corresponding to each action in the batch. q_s_a = q_s.gather(1, a.view(-1,1)).squeeze(0) # Average the squared TD errors over the entire batch loss = self._loss_fn(target_q_value, q_s_a) # Compute the gradients of the loss with respect to the q_network variables. self._optimizer.zero_grad() loss.backward() # Apply the gradient update. self._optimizer.step() if self._current_step % self._target_update_frequency == 0: self._target_network.load_state_dict(self._q_network.state_dict()) # Store the loss for logging purposes (see run_loop implementation above). self.last_loss = loss.detach().numpy() def observe_first(self, timestep: dm_env.TimeStep): self._replay_buffer.add_first(timestep) def observe(self, action: int, next_timestep: dm_env.TimeStep): self._replay_buffer.add(action, next_timestep) # Create a convenient container for the SARS tuples required by NFQ. 
Transitions = collections.namedtuple( 'Transitions', ['state', 'action', 'reward', 'discount', 'next_state']) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_c2f18cc9.py) ###Code # @title Train and evaluate the DQN agent epsilon = 0.25 # @param {type: "number"} num_episodes = 500 # @param {type: "integer"} max_episode_length = 50 # @param {type: "integer"} grid = build_gridworld_task( task='simple', observation_type=ObservationType.GRID, max_episode_length=max_episode_length) environment, environment_spec = setup_environment(grid) # Build the agent's network. class Permute(nn.Module): def __init__(self, order: list): super(Permute,self).__init__() self.order = order def forward(self, x): return x.permute(self.order) q_network = nn.Sequential(Permute([0, 3, 1, 2]), nn.Conv2d(3, 32, kernel_size=4, stride=2,padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.MaxPool2d(3, 1), nn.Flatten(), nn.Linear(384, 50), nn.ReLU(), nn.Linear(50, environment_spec.actions.num_values) ) agent = DQN( environment_spec=environment_spec, network=q_network, batch_size=10, epsilon=epsilon, target_update_frequency=25) returns = run_loop( environment=environment, agent=agent, num_episodes=num_episodes, num_steps=100000) # @title Visualise the learned $Q$ values # @markdown ##### Evaluate the policy for every state, similar to tabular agents above. pi = np.zeros(grid._layout_dims, dtype=np.int32) q = np.zeros(grid._layout_dims + (4,)) for y in range(grid._layout_dims[0]): for x in range(grid._layout_dims[1]): # Hack observation to see what the Q-network would output at that point. environment.set_state(x, y) obs = environment.get_obs() q[y, x] = np.asarray(agent.q_values(obs)) pi[y, x] = np.asarray(agent.select_action(obs)) plot_action_values(q) # @title Compare the greedy policy with the agent's policy environment.plot_greedy_policy(q) plt.figtext(-.08, .95, "Greedy policy using the learnt Q-values") plt.title('') plt.show() environment.plot_policy(pi) plt.figtext(-.08, .95, "Policy using the agent's epsilon-greedy policy") plt.title('') plt.show() ###Output _____no_output_____ ###Markdown **Note:** You’ll get a better estimate of the value functions if you increase `num_episodes` and `max_episode_length`, but this will take longer to train. Feel free to play around after the day! 
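The main ingredient this DQN adds on top of NFQ is the separate target network that is re-synchronised every `target_update_frequency` steps. Below is a minimal standalone sketch of that update pattern with made-up layer sizes and a placeholder training loop; it mirrors what `DQN.update()` above does, but it is not part of the tutorial's agent.

```python
import copy
import torch.nn as nn

online_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net = copy.deepcopy(online_net)  # same architecture and initial weights

target_update_frequency = 25
for step in range(1, 101):
  # ... gradient steps update online_net here, using target_net for the targets ...
  if step % target_update_frequency == 0:
    # Copy the online weights into the frozen target network.
    target_net.load_state_dict(online_net.state_dict())
```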
--- Bonus 2: Beyond Value Based Model-Free Methods*Time estimate: ~25mins* ###Code # @title Video 10: Other RL Methods from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id=f"BV14w411977Y", width=854, height=480, fs=1) print("Video available at https://www.bilibili.com/video/{0}".format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id=f"1N4Jm9loJx4", width=854, height=480, fs=1, rel=0) print("Video available at https://youtube.com/watch?v=" + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') # add event to airtable atform.add_event('Video 10: Other RL Methods') display(out) ###Output _____no_output_____ ###Markdown Cartpole taskHere we switch to training on a different kind of task, which has a continuous action space: Cartpole in [Gym](https://gym.openai.com/). As you recall from the video, policy-based methods are particularly well-suited for these kinds of tasks. We will be exploring two of those methods below. ###Code # @title Make a CartPole environment, `gym.make('CartPole-v1')` env = gym.make('CartPole-v1') # Set seeds env.seed(SEED) set_seed(SEED) ###Output _____no_output_____ ###Markdown Bonus 2.1: Policy gradientNow we will turn to policy gradient methods. Rather than defining the policy in terms of a value function, i.e. $\color{blue}\pi(\color{red}s) = \arg\max_{\color{blue}a}\color{green}Q(\color{red}s, \color{blue}a)$, we will directly parameterize the policy and write it as the distribution\begin{equation}\color{blue}a_t \sim \color{blue}\pi_{\theta}(\color{blue}a_t|\color{red}s_t).\end{equation}Here $\theta$ represent the parameters of the policy. We will update the policy parameters using gradient ascent to **maximize** expected future reward.One convenient way to represent the conditional distribution above is as a function that takes a state $\color{red}s$ and returns a distribution over actions $\color{blue}a$.Defined below is an agent which implements the REINFORCE algorithm. REINFORCE (Williams 1992) is the simplest model-free general reinforcement learning technique.The **basic idea** is to use probabilistic action choice. If the reward at the end turns out to be high, we make **all** actions in this sequence **more likely** (otherwise, we do the opposite).This strategy could reinforce "bad" actions as well, however they will turn out to be part of trajectories with low reward and will likely not get accentuated.From the lectures, we know that we need to compute\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}where $\color{green} G_t$ is the sum over future rewards from time $t$, defined as\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}The algorithm below will collect the state, action, and reward data in its buffer until it reaches a full trajectory. 
It will then update its policy given the above gradient (and the Adam optimizer).A policy gradient trains an agent without explicitly mapping the value for every state-action pair in an environment by taking small steps and updating the policy based on the reward associated with that step. In this section, we will build a small network that trains using policy gradient using PyTorch.The agent can receive a reward immediately for an action or it can receive the award at a later time such as the end of the episode. The policy function our agent will try to learn is $\pi_\theta(a,s)$, where $\theta$ is the parameter vector, $s$ is a particular state, and $a$ is an action.Monte-Carlo Policy Gradient approach will be used, which means the agent will run through an entire episode and then update policy based on the rewards obtained. ###Code # @title Set the hyperparameters for Policy Gradient num_steps = 300 learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # @param {type:"number"} # @markdown Only used in Policy Gradient Method: hidden_neurons = 128 # @param {type:"integer"} ###Output _____no_output_____ ###Markdown Bonus Coding Exercise 2.1: Creating a simple neural networkBelow you will find some incomplete code. Fill in the missing code to construct the specified neural network.Let us define a simple feed forward neural network with one hidden layer of 128 neurons and a dropout of 0.6. Let's use Adam as our optimizer and a learning rate of 0.01. Use the hyperparameters already defined rather than using explicit values.Using dropout will significantly improve the performance of the policy. Do compare your results with and without dropout and experiment with other hyper-parameter values as well. ###Code class PolicyGradientNet(nn.Module): def __init__(self): super(PolicyGradientNet, self).__init__() self.state_space = env.observation_space.shape[0] self.action_space = env.action_space.n ################################################# ## TODO for students: Define two linear layers ## from the first expression raise NotImplementedError("Student exercise: Create FF neural network.") ################################################# # HINT: you can construct linear layers using nn.Linear(); what are the # sizes of the inputs and outputs of each of the layers? Also remember # that you need to use hidden_neurons (see hyperparameters section above). # https://pytorch.org/docs/stable/generated/torch.nn.Linear.html self.l1 = ... self.l2 = ... self.gamma = ... # Episode policy and past rewards self.past_policy = Variable(torch.Tensor()) self.reward_episode = [] # Overall reward and past loss self.past_reward = [] self.past_loss = [] def forward(self, x): model = torch.nn.Sequential( self.l1, nn.Dropout(p=dropout), nn.ReLU(), self.l2, nn.Softmax(dim=-1) ) return model(x) ###Output _____no_output_____ ###Markdown [*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W3D2_BasicReinforcementLearning/solutions/W3D2_Tutorial1_Solution_9aaf4a83.py) Now let's create an instance of the network we have defined and use Adam as the optimizer using the learning_rate as hyperparameter already defined above. ###Code policy = PolicyGradientNet() pg_optimizer = optim.Adam(policy.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Select ActionThe `select_action()` function chooses an action based on our policy probability distribution using the PyTorch distributions package. 
Our policy returns a probability for each possible action in our action space (move left or move right) as an array of length two such as [0.7, 0.3]. We then choose an action based on these probabilities, record our history, and return our action. ###Code def select_action(state): # Select an action (0 or 1) by running the policy model and sampling from the probabilities it outputs state = torch.from_numpy(state).type(torch.FloatTensor) state = policy(Variable(state)) c = Categorical(state) action = c.sample() # Add log probability of chosen action if policy.past_policy.dim() != 0: policy.past_policy = torch.cat([policy.past_policy, c.log_prob(action).reshape(1)]) else: policy.past_policy = (c.log_prob(action).reshape(1)) return action ###Output _____no_output_____ ###Markdown Update policyThis function updates the policy. Reward $G_t$We update our policy by taking a sample of the action value function $Q^{\pi_\theta} (s_t,a_t)$ by playing through episodes of the game. $Q^{\pi_\theta} (s_t,a_t)$ is defined as the expected return from taking action $a$ in state $s$ and then following policy $\pi$.We know that for every step the simulation continues we receive a reward of 1. We can use this to calculate the policy gradient at each time step, where $r$ is the reward for a particular state-action pair. Rather than using the instantaneous reward $r$, we instead use the long-term return $\color{green} G_t$, the discounted sum of all future rewards for the remainder of the episode:\begin{equation}\color{green} G_t = \sum_{n=t}^T \gamma^{n-t} \color{green} R(\color{red}{s_t}, \color{blue}{a_t}, \color{red}{s_{t+1}}).\end{equation}where $\gamma$ is the discount factor (0.99). For example, if an episode lasts 5 steps, the return at each step will be [4.90, 3.94, 2.97, 1.99, 1].Next we scale this vector of returns by subtracting the mean from each element and scaling to unit variance by dividing by the standard deviation. This practice is common in machine learning applications and is the same operation as scikit-learn's __[StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)__. It also has the effect of compensating for future uncertainty. Update Policy: equationAfter each episode we apply the Monte-Carlo Policy Gradient to improve our policy according to the equation:\begin{equation}\Delta\theta_t = \alpha\nabla_\theta \, \log \pi_\theta (s_t,a_t)G_t\end{equation}We then feed our policy history multiplied by the returns to our optimizer and update the weights of our neural network using stochastic gradient **ascent**. This should increase the likelihood of actions that got our agent a larger reward. The following function ```update_policy``` updates the network weights and therefore the policy. ###Code def update_policy(): R = 0 rewards = [] # Discount future rewards back to the present using gamma for r in policy.reward_episode[::-1]: R = r + policy.gamma * R rewards.insert(0, R) # Scale rewards rewards = torch.FloatTensor(rewards) rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps) # Calculate loss pg_loss = (torch.sum(torch.mul(policy.past_policy, Variable(rewards)).mul(-1), -1)) # Update network weights # Use the zero_grad(), backward() and step() methods of the optimizer instance.
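    # zero_grad() clears any gradients left over from the previous episode,
    # backward() backpropagates the policy-gradient loss through the stored
    # log-probabilities, the clamp below keeps each gradient component in
    # [-1, 1] for stability, and step() applies the Adam update.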
pg_optimizer.zero_grad() pg_loss.backward() # Update the weights for param in policy.parameters(): param.grad.data.clamp_(-1, 1) pg_optimizer.step() # Save and initialize episode past counters policy.past_loss.append(pg_loss.item()) policy.past_reward.append(np.sum(policy.reward_episode)) policy.past_policy = Variable(torch.Tensor()) policy.reward_episode = [] ###Output _____no_output_____ ###Markdown TrainingThis is our main policy training loop. For each step in a training episode, we choose an action, take a step through the environment, and record the resulting new state and reward. We call `update_policy()` at the end of each episode to feed the episode history to our neural network and improve our policy. ###Code def policy_gradient_train(episodes): running_reward = 10 for episode in range(episodes): state = env.reset() done = False for time in range(1000): action = select_action(state) # Step through environment using chosen action state, reward, done, _ = env.step(action.item()) # Save reward policy.reward_episode.append(reward) if done: break # Used to determine when the environment is solved. running_reward = (running_reward * gamma) + (time * (1 - gamma)) update_policy() if episode % 50 == 0: print(f"Episode {episode}\tLast length: {time:5.0f}" f"\tAverage length: {running_reward:.2f}") if running_reward > env.spec.reward_threshold: print(f"Solved! Running reward is now {running_reward} " f"and the last episode runs to {time} time steps!") break ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 # @param {type:"integer"} policy_gradient_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for policy gradient def plot_policy_gradient_training(): window = int(episodes / 20) fig, ((ax1), (ax2)) = plt.subplots(1, 2, sharey=True, figsize=[15, 4]); rolling_mean = pd.Series(policy.past_reward).rolling(window).mean() std = pd.Series(policy.past_reward).rolling(window).std() ax1.plot(rolling_mean) ax1.fill_between(range(len(policy.past_reward)), rolling_mean-std, rolling_mean+std, color='orange', alpha=0.2) ax1.set_title(f"Episode Length Moving Average ({window}-episode window)") ax1.set_xlabel('Episode'); ax1.set_ylabel('Episode Length') ax2.plot(policy.past_reward) ax2.set_title('Episode Length') ax2.set_xlabel('Episode') ax2.set_ylabel('Episode Length') fig.tight_layout(pad=2) plt.show() plot_policy_gradient_training() ###Output _____no_output_____ ###Markdown Bonus Exercise 2.1: Explore different hyperparameters.Try running the model again, modifying the hyperparameters, and observe the outputs. Be sure to rerun the function definition cells in order to pick up the updated values.What do you see when you 1. increase the learning rate2. decrease the learning rate3. decrease gamma ($\gamma$)4. increase the number of hidden neurons in the network Bonus 2.2: Actor-criticRecall the policy gradient\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} G_t \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}The policy parameters are updated with a Monte Carlo technique that uses random samples. This introduces high variability in log probabilities and cumulative reward values.
This leads to noisy gradients and can cause unstable learning.One way to reduce variance and increase stability is subtracting the cumulative reward by a baseline:\begin{equation}\nabla J(\theta) = \mathbb{E}\left[ \sum_{t=0}^T \color{green} (G_t - b) \nabla\log\color{blue}\pi_\theta(\color{red}{s_t})\right]\end{equation}Intuitively, reducing cumulative reward will make smaller gradients and thus smaller and more stable (hopefully) updates.From the lecture slides, we know that in Actor Critic Method:1. The “Critic” estimates the value function. This could be the action-value (the Q value) or state-value (the V value).2. The “Actor” updates the policy distribution in the direction suggested by the Critic (such as with policy gradients).Both the Critic and Actor functions are parameterized with neural networks. The "Critic" network parameterizes the Q-value. ###Code # @title Set the hyperparameters for Actor Critic learning_rate = 0.01 # @param {type:"number"} gamma = 0.99 # @param {type:"number"} dropout = 0.6 # Only used in Actor-Critic Method hidden_size = 256 # @param {type:"integer"} num_steps = 300 ###Output _____no_output_____ ###Markdown Actor Critic Network ###Code class ActorCriticNet(nn.Module): def __init__(self, num_inputs, num_actions, hidden_size, learning_rate=3e-4): super(ActorCriticNet, self).__init__() self.num_actions = num_actions self.critic_linear1 = nn.Linear(num_inputs, hidden_size) self.critic_linear2 = nn.Linear(hidden_size, 1) self.actor_linear1 = nn.Linear(num_inputs, hidden_size) self.actor_linear2 = nn.Linear(hidden_size, num_actions) self.all_rewards = [] self.all_lengths = [] self.average_lengths = [] def forward(self, state): state = Variable(torch.from_numpy(state).float().unsqueeze(0)) value = F.relu(self.critic_linear1(state)) value = self.critic_linear2(value) policy_dist = F.relu(self.actor_linear1(state)) policy_dist = F.softmax(self.actor_linear2(policy_dist), dim=1) return value, policy_dist ###Output _____no_output_____ ###Markdown Training ###Code def actor_critic_train(episodes): all_lengths = [] average_lengths = [] all_rewards = [] entropy_term = 0 for episode in range(episodes): log_probs = [] values = [] rewards = [] state = env.reset() for steps in range(num_steps): value, policy_dist = actor_critic.forward(state) value = value.detach().numpy()[0, 0] dist = policy_dist.detach().numpy() action = np.random.choice(num_outputs, p=np.squeeze(dist)) log_prob = torch.log(policy_dist.squeeze(0)[action]) entropy = -np.sum(np.mean(dist) * np.log(dist)) new_state, reward, done, _ = env.step(action) rewards.append(reward) values.append(value) log_probs.append(log_prob) entropy_term += entropy state = new_state if done or steps == num_steps - 1: qval, _ = actor_critic.forward(new_state) qval = qval.detach().numpy()[0, 0] all_rewards.append(np.sum(rewards)) all_lengths.append(steps) average_lengths.append(np.mean(all_lengths[-10:])) if episode % 50 == 0: print(f"episode: {episode},\treward: {np.sum(rewards)}," f"\ttotal length: {steps}," f"\taverage length: {average_lengths[-1]}") break # compute Q values qvals = np.zeros_like(values) for t in reversed(range(len(rewards))): qval = rewards[t] + gamma * qval qvals[t] = qval #update actor critic values = torch.FloatTensor(values) qvals = torch.FloatTensor(qvals) log_probs = torch.stack(log_probs) advantage = qvals - values actor_loss = (-log_probs * advantage).mean() critic_loss = 0.5 * advantage.pow(2).mean() ac_loss = actor_loss + critic_loss + 0.001 * entropy_term ac_optimizer.zero_grad() 
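        # ac_loss combines the actor term (advantage-weighted log-probabilities)
        # with the critic term and a small entropy term. Because `values`,
        # `qvals` and the entropy are rebuilt from detached NumPy values, the
        # gradient computed by backward() flows through the log-probabilities,
        # and step() then applies the Adam update.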
ac_loss.backward() ac_optimizer.step() # Store results actor_critic.average_lengths = average_lengths actor_critic.all_rewards = all_rewards actor_critic.all_lengths = all_lengths ###Output _____no_output_____ ###Markdown Run the model ###Code episodes = 500 # @param {type:"integer"} env.reset() num_inputs = env.observation_space.shape[0] num_outputs = env.action_space.n actor_critic = ActorCriticNet(num_inputs, num_outputs, hidden_size) ac_optimizer = optim.Adam(actor_critic.parameters()) actor_critic_train(episodes) ###Output _____no_output_____ ###Markdown Plot the results ###Code # @title Plot the training performance for Actor Critic def plot_actor_critic_training(actor_critic, episodes): window = int(episodes / 20) plt.figure(figsize=(15, 4)) plt.subplot(1, 2, 1) smoothed_rewards = pd.Series(actor_critic.all_rewards).rolling(window).mean() std = pd.Series(actor_critic.all_rewards).rolling(window).std() plt.plot(smoothed_rewards, label='Smoothed rewards') plt.fill_between(range(len(smoothed_rewards)), smoothed_rewards - std, smoothed_rewards + std, color='orange', alpha=0.2) plt.xlabel('Episode') plt.ylabel('Reward') plt.subplot(1, 2, 2) plt.plot(actor_critic.all_lengths, label='All lengths') plt.plot(actor_critic.average_lengths, label='Average lengths') plt.xlabel('Episode') plt.ylabel('Episode length') plt.legend() plt.tight_layout() plt.show() plot_actor_critic_training(actor_critic, episodes) ###Output _____no_output_____
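###Markdown To recap the two bonus methods side by side: both weight the log-probability of each chosen action, REINFORCE by the raw discounted return and actor-critic by the advantage (the return minus a learned baseline). The cell below is a compact, self-contained sketch of that difference added for illustration; the function names are invented for this example and it is not the tutorial's implementation. ###Code
import torch

def discounted_returns(rewards, gamma=0.99):
    # G_t = sum_{n >= t} gamma^(n - t) * r_n, computed backwards in O(T).
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    return torch.tensor(returns)

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Plain REINFORCE: weight each log pi(a_t | s_t) by the standardised return.
    # log_probs: list of scalar tensors collected during one episode.
    g = discounted_returns(rewards, gamma)
    g = (g - g.mean()) / (g.std() + 1e-8)
    return -(torch.stack(log_probs) * g).sum()

def actor_critic_loss(log_probs, values, rewards, gamma=0.99):
    # Advantage actor-critic: weight by G_t - V(s_t) and regress V(s_t) towards G_t.
    # values: list of scalar tensors V(s_t) predicted by the critic.
    g = discounted_returns(rewards, gamma)
    v = torch.stack(values)
    advantage = g - v.detach()            # baseline subtraction reduces variance
    actor = -(torch.stack(log_probs) * advantage).mean()
    critic = 0.5 * (g - v).pow(2).mean()
    return actor + critic

# Sanity check of the return computation from the policy-gradient section:
# five steps of reward 1 with gamma = 0.99 give roughly [4.9, 3.94, 2.97, 1.99, 1.0].
print([round(x, 2) for x in discounted_returns([1, 1, 1, 1, 1]).tolist()])
###Output _____no_output_____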
site/ja/tutorials/keras/keras_tuner.ipynb
###Markdown Copyright 2020 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Keras Tuner の基礎 TensorFlow.org で表示 Google Colab で実行 GitHub でソースを表示 ノートブックをダウンロード 概要Keras Tuner は、TensorFlow プログラム向けに最適なハイパーパラメータを選択するためのライブラリです。ユーザーの機械学習(ML)アプリケーションに適切なハイパーパラメータを選択するためのプロセスは、*ハイパーパラメータチューニング*または*ハイパーチューニング*と呼ばれます。ハイパーパラメータは、ML のトレーニングプロセスとトポロジーを管理する変数です。これらの変数はトレーニングプロセス中、一貫して定数を維持し、ML プログラムのパフォーマンスに直接影響を与えます。ハイパーパラメータには、以下の 2 種類があります。1. **モデルハイパーパラメータ**: 非表示レイヤーの数と幅などのモデルの選択に影響します。2. **アルゴリズムハイパーパラメータ**: 確率的勾配降下法 (SGD) の学習率や k 最近傍 (KNN) 分類器の最近傍の数など、学習アルゴリズムの速度と質に影響します。このチュートリアルでは、Keras Tuner を使用して、画像分類アプリケーションのハイパーチューニングを実施します。 セットアップ ###Code import tensorflow as tf from tensorflow import keras ###Output _____no_output_____ ###Markdown Keras Tuner をインストールしてインポートします。 ###Code !pip install -q -U keras-tuner import keras_tuner as kt ###Output _____no_output_____ ###Markdown データセットをダウンロードして準備するこのチュートリアルでは、Keras Tuner を使用して、[Fashion MNIST データセット](https://github.com/zalandoresearch/fashion-mnist)の服飾の画像を分類する学習モデル向けに最適なハイパーパラメータを見つけます。 データを読み込みます。 ###Code (img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data() # Normalize pixel values between 0 and 1 img_train = img_train.astype('float32') / 255.0 img_test = img_test.astype('float32') / 255.0 ###Output _____no_output_____ ###Markdown モデルを定義するハイパーチューニングを行うモデルを構築する際、モデルアーキテクチャのほかにハイパーパラメータ検索空間も定義します。ハイパーチューニング用にセットアップするモデルを*ハイパーモデル*と呼びます。ハイパーモデルの定義は、以下の 2 つの方法で行います。- モデルビルダー関数を使用する- Keras Tuner API の `HyperModel` クラスをサブクラス化するまた、コンピュータビジョンアプリケーション用の [HyperXception](https://keras-team.github.io/keras-tuner/documentation/hypermodels/hyperresnet-class) と HyperResNet という 2 つの事前定義済みの HyperModel クラスも使用します。このチュートリアルでは、モデルビルダー関数を使用して、画像分類モデルを定義します。モデルビルダー関数は、コンパイル済みのモデルを返し、インラインで定義するハイパーパラメータを使用してモデルをハイパーチューニングします。 ###Code def model_builder(hp): model = keras.Sequential() model.add(keras.layers.Flatten(input_shape=(28, 28))) # Tune the number of units in the first Dense layer # Choose an optimal value between 32-512 hp_units = hp.Int('units', min_value=32, max_value=512, step=32) model.add(keras.layers.Dense(units=hp_units, activation='relu')) model.add(keras.layers.Dense(10)) # Tune the learning rate for the optimizer # Choose an optimal value from 0.01, 0.001, or 0.0001 hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4]) model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp_learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) return model ###Output _____no_output_____ ###Markdown チューナーをインスタンス化してハイパーチューニングを実行するチューナーをインスタンス化して、ハイパーチューニングを実行します。Keras Tuner には、`RandomSearch`、`Hyperband`、`BayesianOptimization`、および `Sklearn` チューナーがあります。このチュートリアルでは、[Hyperband](https://arxiv.org/pdf/1603.06560.pdf) チューナーを使用します。Hyperband チューナーをインスタンス化するには、ハイパーモデル、最適化する `objective`、およびトレーニングするエポックの最大数 (`max_epochs`) を指定する必要があります。 ###Code tuner = kt.Hyperband(model_builder, objective='val_accuracy', max_epochs=10, 
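                     # factor=3 below is Hyperband's reduction factor: after each round,
                     # roughly 1/factor of the candidate models are kept and the epoch
                     # budget of each survivor grows by a factor of `factor` (this follows
                     # the standard Hyperband successive-halving scheme).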
factor=3, directory='my_dir', project_name='intro_to_kt') ###Output _____no_output_____ ###Markdown Hyperband チューニングアルゴリズムは、適応型リソース割り当てと早期停止を使用して、高パフォーマンスモデルに素早く収束させます。これは、トーナメント式のツリーを使用して行われます。アルゴリズムは、数回のエポックで大量のモデルをトレーニングし、性能の高い上位半数のモデル次のラウンドに持ち越します。Hyperband は、1 + logfactor(`max_epochs`) を計算し、直近の整数に繰り上げて、トーナメントでトレーニングするモデル数を決定します。 検証損失の特定の値に達した後、トレーニングを早期に停止するためのコールバックを作成します。 ###Code stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5) ###Output _____no_output_____ ###Markdown ハイパーパラメータ検索を実行します。検索メソッドの引数は、上記のコールバックのほか、`tf.keras.model.fit` に使用される引数と同じです。 ###Code tuner.search(img_train, label_train, epochs=50, validation_split=0.2, callbacks=[stop_early]) # Get the optimal hyperparameters best_hps=tuner.get_best_hyperparameters(num_trials=1)[0] print(f""" The hyperparameter search is complete. The optimal number of units in the first densely-connected layer is {best_hps.get('units')} and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}. """) ###Output _____no_output_____ ###Markdown モデルをトレーニングする検索から取得したハイパーパラメータを使用してモデルをトレーニングするための最適なエポック数を見つけます。 ###Code # Build the model with the optimal hyperparameters and train it on the data for 50 epochs model = tuner.hypermodel.build(best_hps) history = model.fit(img_train, label_train, epochs=50, validation_split=0.2) val_acc_per_epoch = history.history['val_accuracy'] best_epoch = val_acc_per_epoch.index(max(val_acc_per_epoch)) + 1 print('Best epoch: %d' % (best_epoch,)) ###Output _____no_output_____ ###Markdown ハイパーモデルを再インスタンス化し、前述の最適なエポック数でトレーニングします。 ###Code hypermodel = tuner.hypermodel.build(best_hps) # Retrain the model hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2) ###Output _____no_output_____ ###Markdown このチュートリアルを終了するには、テストデータでハイパーモデルを評価します。 ###Code eval_result = hypermodel.evaluate(img_test, label_test) print("[test loss, test accuracy]:", eval_result) ###Output _____no_output_____ ###Markdown Copyright 2020 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Keras Tuner の基礎 TensorFlow.org で表示 Google Colab で実行 GitHub でソースを表示 ノートブックをダウンロード 概要Keras Tuner は、TensorFlow プログラム向けに最適なハイパーパラメータを選択するためのライブラリです。ユーザーの機械学習(ML)アプリケーションに適切なハイパーパラメータを選択するためのプロセスは、*ハイパーパラメータチューニング*または*ハイパーチューニング*と呼ばれます。ハイパーパラメータは、ML のトレーニングプロセスとトポロジーを管理する変数です。これらの変数はトレーニングプロセス中、一貫して定数を維持し、ML プログラムのパフォーマンスに直接影響を与えます。ハイパーパラメータには、以下の 2 種類があります。1. **モデルハイパーパラメータ**: 非表示レイヤーの数と幅などのモデルの選択に影響します。2. 
**アルゴリズムハイパーパラメータ**: 確率的勾配降下法(SGD)の学習速度や k 最近傍(KNN)分類器の最近傍の数など、学習アルゴリズムの速度と質に影響します。このチュートリアルでは、Keras Tuner を使用して、画像分類アプリケーションのハイパーチューニングを実施します。 セットアップ ###Code import tensorflow as tf from tensorflow import keras import IPython ###Output _____no_output_____ ###Markdown Keras Tuner をインストールしてインポートします。 ###Code !pip install -U keras-tuner import kerastuner as kt ###Output _____no_output_____ ###Markdown データセットをダウンロードして準備するこのチュートリアルでは、Keras Tuner を使用して、[Fashion MNIST データセット](https://github.com/zalandoresearch/fashion-mnist)の服飾の画像を分類する学習モデル向けに最適なハイパーパラメータを見つけます。 データを読み込みます。 ###Code (img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data() # Normalize pixel values between 0 and 1 img_train = img_train.astype('float32') / 255.0 img_test = img_test.astype('float32') / 255.0 ###Output _____no_output_____ ###Markdown モデルを定義するハイパーチューニングを行うモデルを構築する際、モデルアーキテクチャのほかにハイパーパラメータ検索空間も定義します。ハイパーチューニング用にセットアップするモデルを*ハイパーモデル*と呼びます。ハイパーモデルの定義は、以下の 2 つの方法で行います。- モデルビルダー関数を使用する- Keras Tuner API の `HyperModel` クラスをサブクラス化するまた、コンピュータビジョンアプリケーション用の [HyperXception](https://keras-team.github.io/keras-tuner/documentation/hypermodels/hyperresnet-class) と HyperResNet という 2 つの事前定義済みの HyperModel クラスも使用します。このチュートリアルでは、モデルビルダー関数を使用して、画像分類モデルを定義します。モデルビルダー関数は、コンパイル済みのモデルを返し、インラインで定義するハイパーパラメータを使用してモデルをハイパーチューニングします。 ###Code def model_builder(hp): model = keras.Sequential() model.add(keras.layers.Flatten(input_shape=(28, 28))) # Tune the number of units in the first Dense layer # Choose an optimal value between 32-512 hp_units = hp.Int('units', min_value = 32, max_value = 512, step = 32) model.add(keras.layers.Dense(units = hp_units, activation = 'relu')) model.add(keras.layers.Dense(10)) # Tune the learning rate for the optimizer # Choose an optimal value from 0.01, 0.001, or 0.0001 hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4]) model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate), loss = keras.losses.SparseCategoricalCrossentropy(from_logits = True), metrics = ['accuracy']) return model ###Output _____no_output_____ ###Markdown チューナーをインスタンス化してハイパーチューニングを実行するチューナーをインスタンス化して、ハイパーチューニングを実行します。Keras Tuner には、`RandomSearch`、`Hyperband`、`BayesianOptimization`、および `Sklearn` チューナーがあります。このチュートリアルでは、[Hyperband](https://arxiv.org/pdf/1603.06560.pdf) チューナーを使用します。Hyperband チューナーをインスタンス化するには、ハイパーモデル、最適化する `objective`、およびトレーニングするエポックの最大数(`max_epochs`)を指定する必要があります。 ###Code tuner = kt.Hyperband(model_builder, objective = 'val_accuracy', max_epochs = 10, factor = 3, directory = 'my_dir', project_name = 'intro_to_kt') ###Output _____no_output_____ ###Markdown Hyperband チューニングアルゴリズムは、適応型リソース割り当てと早期停止を使用して、高パフォーマンスモデルに素早く収束させます。これは、トーナメント式のツリーを使用して行われます。アルゴリズムは、数回のエポックで大量のモデルをトレーニングし、性能の高い上位半数のモデル次のラウンドに持ち越します。Hyperband は、1 + logfactor(`max_epochs`) を計算し、直近の整数に繰り上げて、トーナメントでトレーニングするモデル数を決定します。 ハイパーパラメータ検索を実行する前に、トレーニングステップごとにトレーニング出力をクリアにするコールバックを定義します。 ###Code class ClearTrainingOutput(tf.keras.callbacks.Callback): def on_train_end(*args, **kwargs): IPython.display.clear_output(wait = True) ###Output _____no_output_____ ###Markdown ハイパーパラメータ検索を実行します。検索メソッドの引数は、上記のコールバックのほか、`tf.keras.model.fit` に使用される引数と同じです。 ###Code tuner.search(img_train, label_train, epochs = 10, validation_data = (img_test, label_test), callbacks = [ClearTrainingOutput()]) # Get the optimal hyperparameters best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0] print(f""" The hyperparameter search is complete. 
The optimal number of units in the first densely-connected layer is {best_hps.get('units')} and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}. """) ###Output _____no_output_____ ###Markdown このチュートリアルの最後のステップとして、検索から得た最適なハイパーパラメータでモデルを保存します。 ###Code # Build the model with the optimal hyperparameters and train it on the data model = tuner.hypermodel.build(best_hps) model.fit(img_train, label_train, epochs = 10, validation_data = (img_test, label_test)) ###Output _____no_output_____ ###Markdown Copyright 2020 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Keras Tuner の基礎 TensorFlow.org で表示 Google Colab で実行 GitHub でソースを表示 ノートブックをダウンロード 概要Keras Tuner は、TensorFlow プログラム向けに最適なハイパーパラメータを選択するためのライブラリです。ユーザーの機械学習(ML)アプリケーションに適切なハイパーパラメータを選択するためのプロセスは、*ハイパーパラメータチューニング*または*ハイパーチューニング*と呼ばれます。ハイパーパラメータは、ML のトレーニングプロセスとトポロジーを管理する変数です。これらの変数はトレーニングプロセス中、一貫して定数を維持し、ML プログラムのパフォーマンスに直接影響を与えます。ハイパーパラメータには、以下の 2 種類があります。1. **モデルハイパーパラメータ**: 非表示レイヤーの数と幅などのモデルの選択に影響します。2. **アルゴリズムハイパーパラメータ**: 確率的勾配降下法(SGD)の学習速度や k 最近傍(KNN)分類器の最近傍の数など、学習アルゴリズムの速度と質に影響します。このチュートリアルでは、Keras Tuner をし米須ヒデ、画像分類アプリケーションのハイパーチューニングを実施します。 セットアップ ###Code import tensorflow as tf from tensorflow import keras import IPython ###Output _____no_output_____ ###Markdown Keras Tuner をインストールしてインポートします。 ###Code !pip install -U keras-tuner import kerastuner as kt ###Output _____no_output_____ ###Markdown データセットをダウンロードして準備するこのチュートリアルでは、Keras Tuner を使用して、[Fashion MNIST データセット](https://github.com/zalandoresearch/fashion-mnist)の服飾の画像を分類する学習モデル向けに最適なハイパーパラメータを見つけます。 データを読み込みます。 ###Code (img_train, label_train), (img_test, label_test) = keras.datasets.fashion_mnist.load_data() # Normalize pixel values between 0 and 1 img_train = img_train.astype('float32') / 255.0 img_test = img_test.astype('float32') / 255.0 ###Output _____no_output_____ ###Markdown モデルを定義するハイパーチューニングを行うモデルを構築する際、モデルアーキテクチャのほかにハイパーパラメータ検索空間も定義します。ハイパーチューニング用にセットアップするモデルを*ハイパーモデル*と呼びます。ハイパーモデルの定義は、以下の 2 つの方法で行います。- モデルビルダー関数を使用する- Keras Tuner API の `HyperModel` クラスをサブクラス化するまた、コンピュータビジョンアプリケーション用の [HyperXception](https://keras-team.github.io/keras-tuner/documentation/hypermodels/hyperresnet-class) と HyperResNet という 2 つの事前定義済みの HyperModel クラスも使用します。このチュートリアルでは、モデルビルダー関数を使用して、画像分類モデルを定義します。モデルビルダー関数は、コンパイル済みのモデルを返し、インラインで定義するハイパーパラメータを使用してモデルをハイパーチューニングします。 ###Code def model_builder(hp): model = keras.Sequential() model.add(keras.layers.Flatten(input_shape=(28, 28))) # Tune the number of units in the first Dense layer # Choose an optimal value between 32-512 hp_units = hp.Int('units', min_value = 32, max_value = 512, step = 32) model.add(keras.layers.Dense(units = hp_units, activation = 'relu')) model.add(keras.layers.Dense(10)) # Tune the learning rate for the optimizer # Choose an optimal value from 0.01, 0.001, or 0.0001 hp_learning_rate = hp.Choice('learning_rate', values = [1e-2, 1e-3, 1e-4]) model.compile(optimizer = keras.optimizers.Adam(learning_rate = hp_learning_rate), loss = 
keras.losses.SparseCategoricalCrossentropy(from_logits = True), metrics = ['accuracy']) return model ###Output _____no_output_____ ###Markdown チューナーをインスタンス化してハイパーチューニングを実行するチューナーをインスタンス化して、ハイパーチューニングを実行します。Keras Tuner には、`RandomSearch`、`Hyperband`、`BayesianOptimization`、および `Sklearn` チューナーがあります。このチュートリアルでは、[Hyperband](https://arxiv.org/pdf/1603.06560.pdf) チューナーを使用します。Hyperband チューナーをインスタンス化するには、ハイパーモデル、最適化する `objective`、およびトレーニングするエポックの最大数(`max_epochs`)を指定する必要があります。 ###Code tuner = kt.Hyperband(model_builder, objective = 'val_accuracy', max_epochs = 10, factor = 3, directory = 'my_dir', project_name = 'intro_to_kt') ###Output _____no_output_____ ###Markdown Hyperband チューニングアルゴリズムは、適応型リソース割り当てと早期停止を使用して、高パフォーマンスモデルに素早く収束させます。これは、トーナメント式のツリーを使用して行われます。アルゴリズムは、数回のエポックで大量のモデルをトレーニングし、性能の高い上位半数のモデル次のラウンドに持ち越します。Hyperband は、1 + logfactor(`max_epochs`) を計算し、直近の整数に繰り上げて、トーナメントでトレーニングするモデル数を決定します。 ハイパーパラメータ検索を実行する前に、トレーニングステップごとにトレーニング出力をクリアにするコールバックを定義します。 ###Code class ClearTrainingOutput(tf.keras.callbacks.Callback): def on_train_end(*args, **kwargs): IPython.display.clear_output(wait = True) ###Output _____no_output_____ ###Markdown ハイパーパラメータ検索を実行します。検索メソッドの引数は、上記のコールバックのほか、`tf.keras.model.fit` に使用される引数と同じです。 ###Code tuner.search(img_train, label_train, epochs = 10, validation_data = (img_test, label_test), callbacks = [ClearTrainingOutput()]) # Get the optimal hyperparameters best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0] print(f""" The hyperparameter search is complete. The optimal number of units in the first densely-connected layer is {best_hps.get('units')} and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}. """) ###Output _____no_output_____ ###Markdown このチュートリアルの最後のステップとして、検索から得た最適なハイパーパラメータでモデルを保存します。 ###Code # Build the model with the optimal hyperparameters and train it on the data model = tuner.hypermodel.build(best_hps) model.fit(img_train, label_train, epochs = 10, validation_data = (img_test, label_test)) ###Output _____no_output_____
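###Markdown As an optional extra (not part of the original tutorial): instead of rebuilding the hypermodel from `best_hps` and retraining it, keras-tuner can also return the best model found during the search itself via `get_best_models`. A minimal sketch, reusing the `tuner`, `img_test` and `label_test` objects defined above: ###Code
# Retrieve the best model, with the weights checkpointed during the search.
best_models = tuner.get_best_models(num_models=1)
best_model = best_models[0]
best_model.summary()

# Evaluate it on the held-out test images, as a quick comparison point against
# the model retrained from best_hps above.
eval_result = best_model.evaluate(img_test, label_test)
print("[test loss, test accuracy]:", eval_result)
###Output _____no_output_____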